
Migrating an ASP.Net application to Windows Azure

Azure: It’s not just a nice color

Windows Azure is Microsoft’s entry into the cloud. Microsoft was a little late to the Cloud Computing party compared to heavyweights like Amazon AWS (S3, EC2) and Google Apps/App Engine, but Windows Azure is nevertheless a very strong entrant in its own right. It is intended to simplify IT management and minimize up-front and ongoing expenses. To this end, Azure was designed to facilitate the management of scalable web applications over the Internet, with the hosting and management environment maintained at Microsoft data centers.

Windows Azure can be used to create, distribute, and upgrade web applications without the need to maintain expensive, often underutilized resources onsite. New web services and applications can be written and debugged with a minimum of overhead and personnel expense. Azure enables you to easily scale your applications up or down to any size, and you only pay for the resources your application uses. Azure is deployed across multiple data centers around the world, enabling you to deploy your applications closer to your customers. The key value for Microsoft shops is that it has unique features for the .NET stack and is very competitively priced compared to the market leader Amazon and others. It is a compelling value for startups, smaller shops, or larger corporations that want to host an application (or a proof of concept) at a low entry price, with high availability and the possibility of quick upward scaling.

Why we chose Windows Azure

For the client we implemented Azure for, we had previously built an ASP.NET web application that saw growing traffic, first domestically and then internationally. Our client also wanted to expand their product base. At the time we were hosted on a beefy single-box dedicated environment. This posed a challenge for the technical team: how could the web site be scaled with increased usage, and how would it perform under increased data and traffic load? We wanted a hosting environment that would scale up with increased site usage and also scale out with increased data storage. Of course, we could have moved to a multi-box environment with load balancing, etc., but we were already paying large hosting fees for an application that had a lot of users but whose traffic patterns were sporadic (mainly geared around when new products/data came out). We felt it was a good time to look at cloud computing as a sound technical as well as economical alternative.

After doing the cost analysis for upgrading the physical web server and database server to get the performance and scalability we needed, we decided to go with a cloud environment. (More details to come in a follow-up post.) Here are some of the immediate benefits:

  1. Cloud allows you to decrease your costs for building and expanding your on-premises resources
  2. Depending on how you use it, and especially with the Platform as a Service or Infrastructure as a Service options, it really cuts down the maintenance and support side of traditional hosting: no SQL Server maintenance, OS-level patching, IIS settings, etc. to deal with
  3. Dynamic scaling: with increased usage of the site we don’t have to invest in new servers; we can spin up new VMs on an as-needed basis and pay only for what we use
  4. Easy deployment with no downtime
  5. Add-on features like: Dedicated Caching processes, international regional availability, Data Analytics, Hybrid Services, etc.
  6. Data replication, backups, etc. happen behind the scenes and are automatic

Once we determined it was worth serious investigation, we decided to explore its use as a Disaster Recovery environment. Then we jumped right into the nuts and bolts. Below are some of our learnings.

Web application Migration Process:

Before you begin, you should have the Windows Azure SDK installed (available for download from the Windows Azure site).

Migrating an existing ASP.NET application to run on the Windows Azure platform is a three-step process:

  1. Create a new Windows Azure project (open Visual Studio in Administrator mode).
  2. Alter configuration values.
  3. Deploy the application to cloud environment.

Create New Windows Azure project:

  1. Open Visual Studio in Administrator mode and create a new project by clicking File –> New Project.
  2. Select C# –> Cloud –> Windows Azure Cloud Service.
  3. Add “Roles” to your project. Our ASP.NET web application was using caching, so I created a new Azure project with two roles: a Web Role and a Cache Worker Role. (If you are not making use of shared caching, you don’t need the Cache Worker Role.)
  4. To include an existing web project, you can skip the above step; from Solution Explorer, right-click to include your project as a “Web Role”.
  5. Create a cloud service using Quick Create.

Alter configuration values

  1. Change configuration values, such as the connection string, so they point to the correct database, and set the number of instances to run in the cloud (a sample configuration file is shown below).
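For reference, here is a minimal sketch of what the relevant portion of the ServiceConfiguration.cscfg file might look like; the service name, role name, setting name, and connection string are placeholders, not values from our project:

    <?xml version="1.0" encoding="utf-8"?>
    <!-- Hypothetical ServiceConfiguration.cscfg excerpt: Instances controls how many
         role instances Azure runs, and ConfigurationSettings hold values such as the
         database connection string. -->
    <ServiceConfiguration serviceName="MyCloudService"
        xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
      <Role name="MyWebRole">
        <Instances count="2" />
        <ConfigurationSettings>
          <Setting name="DefaultConnection"
                   value="Server=tcp:myserver.database.windows.net;Database=MyDb;User ID=myuser;Password=..." />
        </ConfigurationSettings>
      </Role>
    </ServiceConfiguration>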

Deploy the application to cloud environment

Following are the steps to deploy a cloud application using Visual Studio 2012.

  1. Right-click on the project and click “Publish”.
  2. The Publish wizard will open up.
  3. Select the subscription and click Next.
  4. Select the Cloud Service, Environment, Build, and Service configuration.
  5. Click Publish to publish the website.

Windows Azure Migration Gotchas!

Even though the migration process to Azure is relatively simple, some minor issues can crop up because the Azure environment isn’t exactly the same as a traditional environment. Here are some of our learnings from migrating the website.

  1. Error when passing string[] to a Web API method: I was passing a string[] to a Web API method, and after the migration this code started throwing an error. Solution: convert the parameter type from string[] to dynamic.
  2. “Web.config transformation: Unrecognized attribute ‘xmlns:xdt’. Note that attribute names are case-sensitive” error. Solution: delete the contents of the “obj” folder.
  3. “ErrorCode:SubStatus:The connection was terminated, possibly due to server or network problems or serialized Object size is greater than MaxBufferSize on server”. Solution: before storing objects in Windows Azure Cache, serialize the object and check its size; if it is above 8 MB, split it into smaller chunks, such as byte arrays, and store those chunks in the cache (see the sketch below).
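To illustrate the third workaround, here is a minimal C# sketch of chunking a large object before caching it. It assumes the Windows Azure Caching client (DataCache) and a [Serializable] object; the helper class and key naming are illustrative only, not our production code:

    using System;
    using System.IO;
    using System.Runtime.Serialization.Formatters.Binary;
    using Microsoft.ApplicationServer.Caching;

    public static class ChunkedCacheHelper
    {
        // Keep each chunk well under the 8 MB cache item limit.
        private const int MaxChunkSize = 4 * 1024 * 1024;

        // Serializes the object; if it is too large for a single cache item,
        // stores it as several smaller byte-array chunks plus a chunk count.
        public static void PutLargeObject(DataCache cache, string key, object value)
        {
            byte[] bytes;
            using (var stream = new MemoryStream())
            {
                new BinaryFormatter().Serialize(stream, value);
                bytes = stream.ToArray();
            }

            int chunkCount = (int)Math.Ceiling((double)bytes.Length / MaxChunkSize);
            cache.Put(key + ":chunkCount", chunkCount);

            for (int i = 0; i < chunkCount; i++)
            {
                int offset = i * MaxChunkSize;
                int size = Math.Min(MaxChunkSize, bytes.Length - offset);
                var chunk = new byte[size];
                Array.Copy(bytes, offset, chunk, 0, size);
                cache.Put(key + ":" + i, chunk);
            }
        }
    }

Reading the object back is the mirror image: fetch the chunk count, fetch each chunk, concatenate the byte arrays, and deserialize.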

References:

  • http://msdn.microsoft.com/en-us/library/windowsazure/hh694036.aspx#goodfit_benefits
  • http://www.windowsazure.com/en-us/overview/what-is-windows-azure/

Use of RavenDB and why it was a perfect fit for storing/querying JSON

As mentioned in Part I, one of our clients wanted to go beyond general web site statistics and traffic. In this article we will talk about what we did for advanced reporting statistics and how we did it (with code samples).

A Windows service downloads data from Google Analytics and inserts it into a RavenDB NoSQL database, which allowed us to create a wide variety of advanced reports. The Windows service was configured to run every night and download the daily activity log from the Google Analytics site using their download API.

Before you can download the data you need to:

  1. Register your application using the Google APIs console.
  2. After you’ve registered, go to the API Access tab and copy the “Client ID” and “Client secret” values, which you’ll need later.
  3. Authenticate with the Google API and call its Get method to pull the data from Google Analytics (a hedged sketch of this step is shown after this list).
  4. Once you have the data as JSON objects, the next step is to insert it into a database. We chose RavenDB to store the JSON objects for further data processing.
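For the third step, here is a hedged sketch of what the download code can look like using Google’s .NET client library for the Analytics API (Google.Apis.Analytics.v3). Exact class and package names vary by client library version, and the client ID, secret, and profile ID shown are placeholders:

    using System;
    using System.Threading;
    using Google.Apis.Analytics.v3;
    using Google.Apis.Analytics.v3.Data;
    using Google.Apis.Auth.OAuth2;
    using Google.Apis.Services;

    public static class AnalyticsDownloader
    {
        // Authorizes with the Client ID/secret from the Google APIs console and
        // pulls one day of event data for the given profile (view) ID.
        public static GaData DownloadDailyEvents(
            string clientId, string clientSecret, string profileId, DateTime day)
        {
            var credential = GoogleWebAuthorizationBroker.AuthorizeAsync(
                new ClientSecrets { ClientId = clientId, ClientSecret = clientSecret },
                new[] { AnalyticsService.Scope.AnalyticsReadonly },
                "analytics-download-user",
                CancellationToken.None).Result;

            var service = new AnalyticsService(new BaseClientService.Initializer
            {
                HttpClientInitializer = credential,
                ApplicationName = "Nightly Analytics Download"
            });

            var request = service.Data.Ga.Get(
                "ga:" + profileId,              // profile (view) ID
                day.ToString("yyyy-MM-dd"),     // start date
                day.ToString("yyyy-MM-dd"),     // end date
                "ga:totalEvents");              // metric to pull
            request.Dimensions = "ga:eventCategory,ga:eventAction,ga:eventLabel";

            return request.Execute();           // rows of event data for the day
        }
    }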

Why RavenDB?
RavenDB is a relatively new, open source document database for .NET. Since the data coming from Google Analytics was already in JSON format, it made sense to keep it that way and store it in a document database, which made retrieval of information fast and easy (more detail below). We also considered another popular document database, MongoDB, but we chose RavenDB over MongoDB because:

  • Built in C#
  • Batch transaction support
  • Optimistic concurrency
  • Full-text queries
  • Static and ad-hoc indexes
  • Triggers
  • A REST API

Setting up RavenDB: The RavenDB server instance can be instantiated in several ways:

  • Running the Raven.Server.exe console application (located under /Server/ in the build package).
  • Running RavenDB as a service.
  • Integrating RavenDB with IIS on your Windows based server.
  • Embedding the server in your application (see the example below).
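For example, the last option (embedding the server) takes only a few lines; here is a minimal sketch, with an illustrative data directory:

    using Raven.Client.Embedded;

    public static class EmbeddedRavenSetup
    {
        // Runs RavenDB in-process: documents are stored under the given data
        // directory, with no separate server to install or manage.
        public static EmbeddableDocumentStore CreateStore()
        {
            var store = new EmbeddableDocumentStore
            {
                DataDirectory = "App_Data/RavenDB"
            };
            store.Initialize();
            return store;
        }
    }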

A great source of information on setting up a RavenDB database is the official documentation on the RavenDB site (see the references below).

Retrieving data from RavenDB:

  1. Connecting to RavenDB: There are four modes for setting up the RavenDB data source. We configured RavenDB in server mode; the sketch after this list shows how to connect in server mode.
  2. Querying the database to retrieve data: The built-in LINQ provider implements the IQueryable interface, which makes it very easy to retrieve data by writing LINQ queries. The sketch below includes an example of getting companies from the RavenDB database using LINQ (visit the RavenDB knowledge base for more details).
  3. Creating custom reports: Filtering criteria are based on user input; data from the database is retrieved and parsed into C# objects using LINQ queries. Data calculations and graphs were generated at runtime.
  4. Creating graphs: We used the Rickshaw toolkit for creating interactive graphs.
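To make the first two items concrete, here is a minimal C# sketch of connecting to a RavenDB server and querying it with LINQ. The URL, database name, and Company class are illustrative placeholders rather than our production schema:

    using System.Linq;
    using Raven.Client;
    using Raven.Client.Document;

    public class Company
    {
        public string Id { get; set; }
        public string Name { get; set; }
    }

    public static class RavenQueries
    {
        public static void Run()
        {
            // Connect to a RavenDB instance running in server mode.
            var store = new DocumentStore
            {
                Url = "http://localhost:8080",
                DefaultDatabase = "AnalyticsReports"
            };
            store.Initialize();

            using (IDocumentSession session = store.OpenSession())
            {
                // The built-in LINQ provider translates this into a RavenDB query.
                var companies = session.Query<Company>()
                                       .Where(c => c.Name.StartsWith("A"))
                                       .ToList();
            }
        }
    }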

For the client, we were able to use this data to create Timeline reports by pulling past usage data and comparing it to current data over a variety of time periods for practically every action a user can perform on the site.

Thus, by using JSON and Knockout for binding and the Rickshaw toolkit for graphing, we were able to create scalable, customizable, and interactive reports. RavenDB played a key role by providing us with a fast and reliable data source.

Let me know if you have questions or comments on our experience!

References:

  1. Google APIs console
  2. http://ravendb.net/
  3. http://code.shutterstock.com/rickshaw/

Picture courtesy of Gosa Postoronca

Want to make routine releases more bug proof? Consider implementing The Change Map

Problem statement:
All software development teams (whether Agile or not) entering the maintenance phase are charged with keeping application/code quality high while rolling out frequent incremental updates. Unit test coverage (NUnit, MbUnit, PHPUnit, etc.) and automated functional test coverage (e.g. Selenium, QTP, WinRunner) can ease this burden. However, it is not feasible to reach 100% test coverage and then keep those tests updated at a high coverage rate on an ongoing basis. A savvy Dev Lead will use instinct and experience to know how to best deploy Dev and QA resources while minimizing risk. (Check back for our follow-up article on criteria to optimize test automation.)

At the same time, it is also cost prohibitive to run every possible manual test case for every release, regardless of scope, across multiple environments. Manual tests are cheap to write and maintain but expensive to execute repeatedly. (This is, of course, the exact reverse of the cost/benefit proposition for automated tests.) Given that there will always be some manual testing involved in a release, technical leads must decide on the scope of testing for every release. Deciding the scope of development work is often a careful, deliberate, and explicit process. When it comes to the scope of testing, though, QA leads very often decide it implicitly based on the items addressed in a release, e.g. if a bug in Feature X was addressed then Feature X needs to be tested. Most of the time that works well if you have experienced QA leads. Every now and then, though, the code fix for Feature X may be made in a component A that is reused in Feature Y. Throw in a minor oversight and… Voilà! You have a bug in Feature Y that no one thought to test, with a clear path to production.
So what is the solution?
Enter the Change Map. This is nothing more than a simple Traceability Matrix that explicitly identifies/references the test cases to be executed for a change request, typically in the context of a ticketing system.
Here is how it works: When a new change request (or bug) is approved for development, a Change Map should be identified. This is done through a conversation between the Functional Owner, Solution Lead, developer(s), QA Lead, and testers to identify which areas/modules will be affected by the change. It should happen first when the desired functionality is discussed, and then be revalidated when the technical design approach has solidified.


Not sure how it would work? Let’s play “What was my team thinking?!”
Let us consider this interaction occurring in the context of a role based web application that is currently in production.

Client (Day 1): “We want to add a new role that has access to XYZ. Should be a simple change right?”
Business Analyst (Day 1): “Yes, our application already supports role based access. This change is straight forward for implementation.”
Tech Lead (Day 2): “Hmm… one of the areas we are restricting wasn’t considered a separate restricted resource before. We may need to update the base class to recognize this sub-feature as a separate feature”
Developer (Day 2): “Yeah, I know where to do that. Done.”
QA Lead (Day 3): “We need to test this new role thoroughly”
Tester (Day 3): “New role works fine!”
PM (Day 4): “We are a GO for the release.”
Client (Day 5): “My call center is deluged with existing users unable to use the system! You just broke everything, I thought you said this was an easy change?!”

In the above scenario, the change to add a role introduced a bug into a pre-existing role and the bug had a clear path to production since no one brought up that other roles need to be tested. Here is what could have been different if we had used the Change Map:

Client (Day 1): “We want to add a new role that has access to XYZ. Should be a simple change right?”
Business Analyst (Day 1): “Yes, our application already supports role based access. This change should be straight forward for implementation but I will get back to you once my Tech Lead and Tester review it.”
Tester (Day 2): “Well, I have my test cases drawn up for the new role. That part is easy, but Mr. Tech Lead can advise if anything else needs to be added to the Change Map”
Tech Lead (Day 2): “Hmm… one of the areas we are restricting wasn’t considered a separate restricted resource before. We may need to update the base class to recognize this sub-feature as a separate feature. Ms. Developer, can you validate that and if necessary add those to the Change Map”
Developer (Day 2): “Yeah, I know where to make that change. And I’ve updated the ticket with the additional Change Map test cases.”
QA Lead (Day 3): “We need to test this new role thoroughly, as well as the additional test cases called out in the Change Map”
Tester (Day 3): “New role works fine but older roles are broken!”
Developer (Day 3): “Ah! Missed a spot, there you go”
PM (Day 4): “We are a GO for the release.”
Client (Day 5): “New role works fine. Here is another change I need…”

How does this help?
1. Codify decision making criteria: The main value here was to make the implicit process of deciding what gets tested explicit and involve non-QA actors in the decision making process
2. Improve communication/cross role understanding: Some minor tweaks to the mental framework of team members to think as a team and hold each other responsible for considering downstream effects of code changes can reap significant benefits
3. Continuous process improvement: If bugs still sneak through, perform an After Action Review to determine what updates need to be made to #1 to prevent recurrence

NOTE: The Change Map should be a lightweight addition to the ticketing process. When in doubt, players should be looking to broaden testing scope rather than narrow it.

Happy Change Mapping!

6 New Fundraising Traps in a Personalized World

The inbox is getting competitive. Your best may no longer be good enough. The rapid adoption of Customer Relationship Management (CRM) technology is making slow adopters appear backward. With players like Amazon, Netflix, and Facebook serving up interest-specific, personalized content on a minute-to-minute basis, the expectation for custom treatment is skyrocketing. Is it fair that your donors are comparing you to billion-dollar marketers? No, but it is a reality. That said, here are a few traps to watch for as you wade into the CRM space with a personalized offering:

1. Marry You? How About We Get A Drink First.
 The first trap is a failure to provide opportunities for low-commitment interaction. Yes, we know everyone wants high commitment from their target groups. But for that to happen you must nurture a community of donors over time, rather than try to harpoon new donations by blitzing everyone repeatedly. However sophisticated your appeals may be, without a funnel of engagement that allows you to attract a wide audience of new prospects, your “dating pool” will soon thin out. Use your communication forums, especially social media, to start conversations and gradually grow prospects’ commitment as they funnel up through ever-more-meaningful interactions. For example, a higher education prospect for a college may start by signing up for information relevant to them, then “like” your Facebook page, later they show up at an event, accept an invitation to volunteer, and finally they donate.

2. Money, Money, Money, That’s All I Ever Hear From You
 The 2nd trap is that the same tools that make it easier to pinpoint donors make it easier to ask too often. Are you engaging in meaningful conversations that involve regular opportunities for alternative interactions? You can find creative ways to tap your prospects’ professional skills, networks, and resources rather than only cash contributions. Social networks are great forums for engaging a non-profit community around volunteer activities and asking them to spread the word on your behalf. Remember that you’re building long-term relationships. Share your successes, send birthday cards, and remember that ongoing pursuit of the optimal communications strategy is the new normal. You’re going to have to test. It’s going to take time—even simple things like message frequency. The right message frequency for one demographic may be far too frequent for another and not frequent enough for a third. What works today will undoubtedly change, and you will have to commit to staying one step ahead in the learning curve.

3. No, You’re Thinking Of My Roommate
 Are you engaging prospective donors with the programs that interest them most? When you make it an organizational priority to identify donors’ interests and track them in your CRM, you gain the ability to solicit them for giving in a much more nuanced way. But you also open yourself up to spectacular organization failures. Very specific pitches mistakenly aimed at the wrong target can be startling and even offensive. These failures find their way onto social media sites—the worst even make the news. And that can crush all of your careful planning. If you have followed through on personalization, you’ve now got dozens of variations of communications to write across different mediums and even devices. And you have to make sure they encompass a full communication progression without any odd repetition or embarrassing gaps.

4. Do You Even Know Me?
 Are you tracking capacity for giving? Don’t ask people to give at the wrong level. You will alienate even loyal supporters if you repeatedly make this mistake. Also, don’t ignore the power of a large number of micro-donations. Political campaigns are won and lost on the level of micro-donations. Data is the premium fuel you need to run this fancy new car. It’s expensive, you need a lot more than you used to, and it constantly needs replacing. That means systems and a commitment of internal resources to ongoing data retrieval and organization. You can’t address someone with messages specific to their life circumstances if you aren’t certain what those circumstances are. And remember, the best data in the world can’t help you if it doesn’t get in your CRM at all (or once there, isn’t accessible easily). It’s a particularly precarious place to be if you are an institution positioning itself as a thought leader. For instance, university alumni may question whether you are really up to the challenge of preparing students for the careers of tomorrow if you do not appear to be technologically savvy in communicating.

5. Will You Respect Me In The Morning?
 So you worked people through the funnel and converted them to donors. Well done, but the story doesn’t end there. If you don’t tweet about your use of the donations and show that you are delivering value on them, or you fail to be sensitive to the input from your community of supporters, you can quickly lose engagement. When incoming communications aren’t treated with the same tone, interest and passion that you show during outreach, trust breaks down and relationships are dented. The 5th trap of inadequate ongoing interaction can be addressed by engaging and responding to donor communications and continuing to keep your understanding of individual donor preferences updated. You must be sure that you form a cohesive strategy in which everyone on your team is prepared with the information they need to engage successfully. CRMs are robust systems with profound capabilities. That also means they are prone to inadequate adoption if they aren’t treated as an operational priority with a matching change in communication philosophy by every team member.

6. And All I Got Was A Lousy T-Shirt 
Are your thanks limited to postcards and trinkets? When you’ve asked for something specific by targeting someone’s unique interests, your thanks need to be equally personalized. Do this by creating a variety of appreciation options in your recognition programs. For example, a university might provide gym access to local alumni donors or allow networking opportunities. How about allowing a donor’s family member who is also a budding model into your next commercial or print campaign? Be creative. You will be surprised at the win-win options you can find.

To conclude: We covered quite a few pitfalls, but as we also saw, there are actionable ways you can mitigate your risk. The sophisticated use of data in marketing communications is exploding, so view this technology as long term and strategic. Then commit to integrating the CRM (and the philosophy it implies) into the fabric of your organization. It is easier said than done, but it is not rocket science. It is all about getting to know your supporters better… and in fundraising, where it is all about relationships, do you have a choice?

Google Analytics as a lightweight web analytics and audit tool

One of our clients wanted to go beyond general web site statistics and traffic; they wanted more of a combination audit tool to provide detailed information about:

  • Who is using their site? (tracking usernames/roles)
  • Click paths and search terms on the site
  • What information are they looking at or downloading as paid subscribers?
  • What are they using to access the site?
  • And even how many times users have to scroll down the page to get to the information they want to look at.

We started out with researching for tools available out there to log this data and give analytical reports. Here are some of the options we looked at:

  1. Software Offered by Your Web Host
  2. Site Counters
  3. Google Analytics
  4. Advanced web analytics tools
    a. Lyris HQ Agency Edition
    b. WebTrends
    c. SiteCatalyst

Google Analytics proved to be the best pick for our purpose, as we were looking for an effective, low-cost solution that still offered powerful means to extend and customize the analytics. The Google Analytics service offered basic out-of-the-box reports for:

  1. Visits by Browser
  2. Visits
  3. Real Time report on Location, Traffic sources

This gave us a bunch of statistics and reports for free. Then we used Google Analytics API to capture detailed user actions like:

  1. Products added to favorites
  2. Product views based on filters and search results
  3. Page views
  4. Report downloads

You can set it up in 3 simple steps:
1. First register at the Google Analytics site using your Google ID.
2. At the end of the registration process, Google Analytics will provide you with a JavaScript snippet, which you need to copy onto every page you wish to track.
3. Once you have the tracking JavaScript added to your web pages, your site starts gathering data. You can view basic site usage reports by logging into the Google Analytics site.

Here is the design approach we took to accomplish the task:


  1. Generate summary reports within Google Analytics and leverage the out-of-the-box reports.
  2. For detailed user-level action reports, we captured custom actions and events in JSON format and logged the data in Google Analytics. We wrote a nightly service to download the data through the Google Web API and store it in a NoSQL database (RavenDB). The JSON captured all the relevant information, such as a unique GUID to identify the user and link all of their activity, login time, logout time, values searched on, filters used, tabs clicked, etc. All of this information was stored in JSON format as key-value pairs.

The JSON data also had a header which contained basic information common across pages and events.


This design gave us the flexibility to add more parameters if we wanted to capture more information for the “Login” user action. The JSON was created in the client browser and then tracked using a Google Analytics event tracking call.


Google provides a very nice Web API interface to download data; a Windows service was written to download the data on a daily basis using this API and insert it into the NoSQL database. From this database we generated the detailed reports the client requested. With all the user events logged as JSON, we were able to come up with all kinds of reports:

  1. Overall company usage
  2. User behavior within companies
  3. Detailed user usage within a session
  4. All actions performed by a user, with the values they filtered/searched on

Thus we were able to use Google Analytics as an audit tool and provide flexible, detailed reports at low cost.

Check back for our upcoming article on our use of RavenDB and why it was a perfect fit for storing/querying JSON.

References: 1. https://developers.google.com/analytics/resources/concepts/gaConceptsAccounts

Definitive Guide to Mobile Application Development

How to Decide Between a Native, Hybrid, or Mobile Web Application, or a Responsive Website

In a previous article, HTML5 vs Native for Web Sites and Applications, I discussed some of the considerations in play when you are in the decision making process for your next web application. I hinted that the decision is not easy and there are lots of factors to consider (e.g. budget, technical resources, feature set). Not finding something that helps organize the decision criteria online, I’ve pulled together a “decision grid” below to try and simplify these factors down and help you make a good decision, whether you are a mobile startup, or gradually moving to mobile, or a business utilizing a mobile first strategy.

Reading the decision grid: A tick means a clear preference for that option. An asterisk means that the option would work but is not the obvious choice. No tick, of course, means that the option is not preferred if that particular criterion is important to you.

[table id=1 /]

[table id=2 /]

Since hybrid mobile apps must be distributed through app stores, by definition they cannot offer the maximum breadth of support for the myriad of internet-capable devices. But, they offer much broader support than a native app.

[table id=3 /]

Extreme budget, time and skill level constraints will be best managed by sticking with a browser-based solution. Hybrid approaches add complexity.

Tech Notes

Hybrid Native vs Hybrid WebView

There are two basic kinds of hybrid web application framework. We’re using “Hybrid Native” for platforms like Appcelerator Titanium that blend the ability to use native UI elements with interpreted JavaScript, and “Hybrid WebView” for platforms like Cordova/PhoneGap that package HTML, CSS and JavaScript into what is essentially a full-screen browser window that exists in app form with full access to device hardware capabilities.

Mobile Web Application vs Responsive Website

These designations can be somewhat confusing and misleading. Both of them refer to a website that is visited in a browser on a variety of devices (desktops, tablets, mobile phones, and everything in between). Is the site a task-oriented application that was designed exclusively for mobile? We’re calling that a Mobile Web Application. Is it more of a regular website (company site, blog, etc) designed to work on desktop as well as mobile? We’re calling that a Responsive Website. The truth is most websites / web applications exist on a continuum somewhere between these two designations, and responsive design techniques are used even when creating a mobile-only web application since form factors vary so much.

Dive Deeper

Your Thoughts?

Do you agree or disagree with this assessment? Did I miss something big? How are you making this decision on your project? Please add your comments below.

Top 10 Gotchas to watch out for when designing HTML for email or eNewsletters

One might reasonably expect that clean, simple HTML that works across browsers with no problem would also work fine for email, right? The answer is a resounding “Nope”. Whether you send HTML email for product/service marketing, news aggregation and dispersal, non-profit marketing, alumni marketing, or anything else, you need to know the pitfalls of rolling your own HTML newsletters.


Email clients are all over the map and far more limited than modern browsers. This makes writing HTML for email delivery a veritable minefield of trial and error (e.g. Outlook actually uses MS Word for rendering!).

So from my trials, here are some of the most common gotchas I’ve found when designing HTML emails:

  1. All CSS should be “inline”. This is a requirement; many webmail clients will completely strip the head and body tags, in addition to classes on html elements.
    Reference:
    • http://kb.mailchimp.com/article/how-does-the-css-inliner-work
    • http://kb.mailchimp.com/article/css-in-html-email
  2. A “doctype” should be added (like Transitional) and also the content-type meta tag (like UTF-8).
    Reference:
    • http://mailchimp.com/resources/guides/html/mailchimp-for-designers/
  3. A wrapper div is required. The width should not be set on the body tag, but on the wrapper div. The standard recommended maximum width for HTML emails is 600px. [Note that a fixed width may result in iPhone formatting issues.]
  4. Use “reset” styles to override a browser’s/email client’s default styling of certain elements. Apply the reset styles to the “wrapper div”.
    Reference:
    • http://meyerweb.com/eric/thoughts/2007/05/01/reset-reloaded/
  5. Avoid br and hr tags and the line-height attribute, as the default line height is different in each email client.
  6. Avoid the padding attribute, as Outlook doesn’t understand padding.
  7. To enforce line height and/or spacing between two elements, the “margin” attribute is the most reliable.
  8. Make sure all images have “alt” tags. In addition, make sure the image container’s size is set appropriately, so that the overall size of the email remains intact.
  9. Last but not least, never assume the basics that are taken for granted in web design, like an anchor tag color or the default font color. There is no such thing as a default color/font; each email client has its own.
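Putting several of these together, here is a minimal skeleton of an HTML email; the widths, colors, and URLs are illustrative only:

    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
        "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html>
      <head>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
      </head>
      <body>
        <!-- The wrapper div carries the width and "reset" styles; everything is inline. -->
        <div style="width:600px; margin:0 auto; font-family:Arial, sans-serif; font-size:14px; color:#333333;">
          <!-- alt text and explicit dimensions keep the layout intact if images are blocked -->
          <img src="https://example.com/banner.png" alt="Monthly newsletter banner"
               width="600" height="120" style="display:block;" />
          <p style="margin:10px 0 0 0;">
            Hello subscriber, here is this month's update.
            <a href="https://example.com/news" style="color:#1a73e8;">Read the full story</a>
          </p>
        </div>
      </body>
    </html>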

Bonus: Pixels are always preferred instead of ems and %s. Note that the iPhone and iPad set a minimum default font size of 13px. This can be overridden, but with caution.
Reference:
• http://www.campaignmonitor.com/blog/post/3447/should-i-use-em-or-px-when-coding-for-html-email/
• http://www.campaignmonitor.com/blog/post/3339/save-your-layout-by-overriding-the-minimum-font-size-on-the-iphone-and/

That’s the list! Please add to the comments with any others that you may have found.

High Performance through Async Operations Using Symfony Background Processing

The Problem aka Opportunity:

Web users expect high performance; who knew?

While standard performance monitoring and tweaking can improve page performance, some requests are bound to take a while due to some slow, blocking operation that is out of your control. Often these operations don’t impact the response going back to the client – think sending email – and thus can be cut from the request altogether. Let’s analyze this scenario with an example.

The Solution – Procrastination:

The answer is doing only the bare minimum needed to return a response, and doing the rest of the processing out-of-band or asynchronously. This can be broken-down into three steps:

  1. Instead of calling a slow task, the request will simply persist an event to a queue in the database
  2. A background process watching the event queue will claim the event and dispatch it to any listeners
  3. The listeners perform any slow, blocking tasks

The idea is not new: async calls, batch processing, message queuing, etc. have existed for a while. In this post we’re taking a look at creating a simple and generic background processing system using Symfony 2, its Event and Dependency Injection tools, and Doctrine 2. Reading the associated documentation/manuals may assist in understanding the code samples below.

So say you’re running a blog and you have a feature that allows users to subscribe to a specific post. Whenever someone adds a comment to the post it needs to email all subscribers. Your controller method for adding comments may look something like this:


    public function createComment($id)
    {
        $em = $this->getDoctrine()->getManager();
        // get the current comment
        $comment = $em->getRepository('ACMEBundle:Comment')->find($id);
    
        // bind the form request
        $form = $this->createForm(new Form(), $comment);
        $form->bind($this->getRequest());
    
        // if form is valid then save the comment and send any subscriber emails
        if ($form->isValid()) {
            $em->persist($comment);
            $em->flush();
            $this->get('email.subscribers')->send($comment);
        }
    }

We’re not going to get into the implementation of get(’email.subscribers’)->send(), but suffice to say it’s a service that does two things: gets a list of all subscribers to the comment’s post; then, it emails each one details of the comment. By doing the bare minimum and keeping the rest for later, we can see that the response time can be improved by delaying things that this user doesn’t care about.

How do we do that? One possible way is to create an email queue and a complementary CLI command that processes it. This solution has some drawbacks, however; while email sending is a frequent choice for something to push off to the background, it’s likely not the only operation that you’ll want to get out of the user’s request (web service calls, heavy database operations, etc.). A much better solution is to create an event firing system that any “after the fact” logic can tap into.

Why do now what you can put off for tomorrow’s cron job?

The implementation is actually pretty simple. You’ll need a service for saving the events, an entity for persisting them, a command/dispatcher for firing saved events, and listeners to handle whatever logic is necessary for that event. Let’s take a look at the service and entity first – starting with the service.


    namespace ACME\EventBundle\Service;

    use Symfony\Component\DependencyInjection\Container;
    use ACME\EventBundle\Entity\Event;
    
    class EventService
    {
        /**
         *
         * @var Container
         */
        protected $container;
    
    
        public function __construct(Container $container)
        {
            // we have to use the container due to "circular reference" exceptions
            $this->container = $container;
        }
    
    
        public function save($name, array $data=array())
        {
            $event = new Event();
            $event->setName($name);
            $event->setData($data);
            $event->setCreated(new \DateTime());
            $event->setProcessed(0);
            $event->setClaimedBy('');
            $event->setError('');
    
            $em = $this->container->get('doctrine.orm.entity_manager');
            $em->persist($event);
            $em->flush($event);
        }
    }

And the Entity…


    namespace ACME\EventBundle\Entity;
    use Doctrine\ORM\Mapping as ORM;
    
    /**
     * @ORM\Table(name="event")
     * @ORM\Entity(repositoryClass="ACME\EventBundle\Entity\EventRepository")
     */
    class Event
    {
        /**
         * @ORM\Column(name="id", type="integer")
         * @ORM\Id
         * @ORM\GeneratedValue(strategy="AUTO")
         */
        private $id;
    
        /**
         * @ORM\Column(name="name", type="string", length=80)
         */
        private $name;
    
        /**
         * @ORM\Column(name="data", type="array")
         */
        private $data;
    
        /**
         * @ORM\Column(name="created", type="datetime")
         */
        private $created;
    
        /**
         * @ORM\Column(name="processed", type="integer")
         */
        private $processed;
    
        /**
         * @ORM\Column(name="claimedBy", type="string", length=80)
         */
        private $claimedBy;
    
        /**
         * @ORM\Column(name="error", type="string")
         */
        private $error;
    
        // getters and setters here...
    }

The idea here is that the controller will call the service’s save() method instead of sending the emails. Of course, you can have the controller save the event directly, but using a service helps trim down your controllers and prevent duplication of code. Let’s see what the updated controller looks like:


    public function createComment($id)
    {
        $em = $this->getDoctrine()->getManager();
        // get the current comment
        $comment = $em->getRepository('ACMEBundle:Comment')->find($id);
    
        // bind the form request
        $form = $this->createForm(new Form(), $comment);
        $form->bind($this->getRequest());
    
        // if form is valid then save the comment and send any subscriber emails
        if ($form->isValid()) {
            $em->persist($comment);
            $em->flush();
            $this->get('event.service')->save('comment.created', array(
                'commentId' => $comment->getId()
            ));
        }
    }

The controller can now simply insert a record instead of sending 30 emails. This will have a very positive impact on the end user – assuming that the emails actually get sent. So let’s talk about that. Here’s a subset of the command that gets executed by a cron job (or whatever you plan on using).


    while(true) {
        # attempts to claim a single event, returns it if successful. may return multiple events.
        // note: $processId is some unique id to this process, helps prevent race conditions (see below)
        $events = $this->em->getRepository('ACMEEventBundle:Event')->claimEvent($processId);
    
        # no events to process, so break out of loop.
        if (count($events) === 0) {
            break;
        }
    
        # iterate over each event to be processed, typically just 1.
        foreach($events as $eventEntity) {
            $output->write("Processing id: {$eventEntity->getId()}" . PHP_EOL);
    
            # create the event...
            $event = new Event($eventEntity);
    
            try {
                # dispatch the event!
                $this->dispatcher->dispatch($eventEntity->getName(), $event);
                # if we made it here we were successful, mark as processed
                $eventEntity->setProcessed(1);
    
            } catch (Exception $e) {
                $eventEntity->setError((string)$e);
            }
    
            $this->em->persist($eventEntity);
            $this->em->flush();
        }
    }

The command is “claiming” a single event, dispatching it with Symfony’s standard eventing system, marking it as processed, then moving to the next event in line. If an exception gets thrown we record it for later troubleshooting (or for when someone asks about it). You would probably have a few jobs that monitor the queue for anything fishy: records with error columns that != '', records that have not been processed for a long time, etc.

We’re not showing the actual Event class, nor the Listener that sends the email. The former is out of brevity (check out Symfony’s docs on the Event class and you’ll be tip-top) and the latter is because that code is app specific. If you want an example check out – you guessed it – Symfony docs.

Before we move on, let’s take a look at that claimEvent() method that gets called in the ProcessCommand class above.


    ...
        protected function processEvent(InputInterface $input, OutputInterface $output)
        {
            # attempts to claim a single event, returns it if successful. may return multiple events.
            $events = $this->em->getRepository('ACMEEventBundle:Event')->claimEvent($this->processId);
            $eventCount = count($events);
    ...

If you’re curious about that repository method – and you should be – here it is


    public function claimEvent($processId) {
    
        $query = 'UPDATE event SET claimedBy = :processId '.
             "WHERE claimedBy = '' ORDER BY created ASC LIMIT 1";
    
        $this->getEntityManager()->getConnection()->executeUpdate($query, array(
            'processId' => $processId
        ));
    
        return $this->createQueryBuilder('aeq')
            ->where('aeq.claimedBy = :processId')
            ->andWhere('aeq.processed = 0')
            ->orderBy('aeq.created')
            ->setParameter('processId', $processId)
            ->getQuery()->getResult();
    }

Let’s take a closer look at that first query…


    $query = 'UPDATE event SET claimedBy = :processId '.
        "WHERE claimedBy = '' ORDER BY created ASC LIMIT 1";

We want to grab an event to process, but we have to make sure that we don’t get an event that has already been claimed. Likewise, we want to ensure that nobody else claims an event that we are about to process. This is a classic race condition, and naturally, good procrastinators will avoid that. We do that by setting the claimedBy column to the current processId, but only if the record currently has an empty value for that column. This should prevent the event from being claimed by multiple processes. We also order by the created column so the queue operates in FIFO (First In First Out), but you can switch to “ORDER BY created DESC” if you prefer FILO (First In Last Out). Note: we use a raw MySQL query here because Doctrine doesn’t have support for – and this is probably a good thing – a LIMIT clause in an UPDATE statement.

Next let’s look at the SELECT query.


    return $this->createQueryBuilder('e')
        ->where('e.claimedBy = :processId')
        ->andWhere('e.processed = 0')
        ->andWhere("e.error = ''")
        ->orderBy('e.created')
        ->setParameter('processId', $processId)
        ->getQuery()->getResult();

This query will return any events that are claimed by this command but have not yet been processed. In theory this is going to be the record we just updated in the previous statement, but we do add support for multiple events being returned, just on the off chance that that scenario occurs.

In Conclusion:

Before wrapping up, I feel obliged to give the “when you’ve got a hammer, everything’s a nail” warning. Just because you CAN throw some logic in a background process doesn’t mean that you always should. In fact, I’d say that things should never be pushed to a background process unless you have no other choice. When you move logic into the background it adds “stuff” around it that should be avoided as much as possible (less efficient, more things to break, can be harder to troubleshoot), so if you have some logic that’s a bit slow, for example, try to rework it before moving it to the background. Otherwise you’ve taken The Procrastination Principle too far.

Finally, if you’re interested in the above concepts but insist on bringing a sword to a knife fight then checkout Gearman and Supervisord.

Thanks for reading, if you have questions please comment below.

HTML5 vs Native for Web Sites and Applications

Go native or go HTML5?

HTML5 and responsive design are promising tools in the hands of talented web designers and developers seeking to “write once, run anywhere.” But can they deliver? How do you choose between the mobile web, hybrid mobile apps, and native for your next project? The answers are not simple, as we are dealing with constantly moving targets in terms of software and hardware capabilities and users’ expectations. Prepare to spend some time researching the current state of affairs and debating internally the best path for your particular situation. This article will hopefully give you a head start.

What’s the difference?

  • Mobile web app: an application developed using standard web technologies such as HTML5, JavaScript and CSS and accessed in a browser on your mobile device. Example: maps.google.com, viewed in a mobile browser
  • Native app: an application developed using platform specific technologies that runs on a particular operating system. Example: Instagram
  • Hybrid mobile app: an application developed using a cross platform framework and standard web technologies that basically packages a mobile web app in a thin native wrapper. Example: Untappd

How do I decide?

There’s not an easy answer to this question. You might think, “If I use HTML5 or a hybrid approach, I can code my app once and deploy it to all platforms! And I might even be able to reuse my code for the desktop browser version of my app!” As is often the case with technology, it isn’t that easy.

To illustrate the difficulties in choosing between these options for mobile apps, let’s talk a little bit about infinite scroll. This user interface paradigm – a page that doesn’t seem to have an end but keeps on loading new data as you scroll – shows up everywhere these days, from websites like Pinterest to many iOS and Android apps, including Facebook and LinkedIn.

I’ll leave it up to you whether this is a good UI paradigm. But I don’t think it’s a stretch to say that if you have an app, with a list, that’s used on touch devices, your users will expect to be able to swipe this list up and see more content.

The problem is, this is difficult to implement on phones and tablets, where memory tends to be the main performance bottleneck. If you develop a native app, there will be a component that allows you to easily create a long, smooth scrolling, infinite list that performs well (e.g. UITableViewController on iOS). But if you’re using HTML5 or a hybrid mobile app, you’re going to have to do more work. And a lot of testing. And even if you succeed for a while with HTML5, you might decide to go back to native (Facebook did).

I would recommend you read about how two of the major players in this space have dealt with this issue recently:

Conclusion and more resources

You’ll need to spend some time evaluating your app before you decide what route to take between mobile web apps, native apps, or hybrid web apps. Who is your target market? Are you using capabilities of a mobile device such as geolocation, camera, accelerometer? Do you have certain screens that might be problematic unless they are developed with native APIs? Learn from others’ experiences, do some preliminary testing, and go for it. Good luck!

Is Responsive Design the Future?

Consider the Future

You are embarking upon a website redesign for your business or organization, and you want to invest your time and money wisely.

You know that next year there will be a slew of snazzy new devices that people are using to access the Internet, and you don’t want your website to suddenly be obsolete just because Google Glass is finally on the market.

Good news: you are already ahead of the curve in considering how future-proof your website will be.

This is not a new problem, by the way. As long as there has been an Internet, there has been a wide variety of ever changing devices (hardware) and browsers (software) used to access it. And for just as long, the smartest people in our industry have advocated techniques that embrace this essential nature of the web.

But the issue has more urgency now, as mobile devices and tablets have penetrated the market to such a degree that some are heralding the end of the PC. One of our clients in higher education is regularly handing out iPads to students and staff alike. If you are creating a web site or web application, making sure it looks great and works well on phones and tablets is a sine qua non.

What is Responsive Design?

So how do you get there? The basics remain the same: start with content first, and make sure you and the people you are working with are designing with web standards and using best practices such as progressive enhancement. Then take it to the next level using responsive design.

Responsive design is just what it sounds like: your website responds well to different situations. For most of your viewers, this is dominated by differences in screen size, pixel density (e.g. retina displays), and orientation (landscape or portrait). Viewing a web site on a desktop computer with a large monitor is a different experience than viewing it on an iPad, or an iPhone, or an Android tablet, or a Kindle, or a BlackBerry, etc., etc., ad infinitum.

How Does Responsive Design Work?

Responsive design utilizes three main techniques:

  1. Fluid Grids: a common example would be a two column layout that collapses to one column on smaller devices.
  2. Flexible Images: not only changing the size of an image as the layout changes, but also providing the correct resolution image for the device and making sure you aren’t forcing phone users on cell networks to download images that are sized for the desktop.
  3. Media Queries: CSS rules keyed to attributes of the viewing device (e.g. maximum device width) that allow styles to change between different states (see the example below)
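As a tiny illustration of the first and third techniques, here is a sketch of a two-column fluid layout that collapses to a single column on narrow screens; the class names and the 600px breakpoint are illustrative, not a recommendation:

    /* Fluid grid: two columns sized in percentages rather than pixels */
    .column {
      float: left;
      width: 48%;
      margin-right: 4%;
    }
    .column:last-child {
      margin-right: 0;
    }

    /* Media query: below 600px, stack the columns into a single full-width column */
    @media (max-width: 600px) {
      .column {
        float: none;
        width: 100%;
        margin-right: 0;
      }
    }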

Here’s an easy way to see how responsive a website is. Pull up your favorite site in a browser on your desktop computer. Maximize your browser window. Now start to scale down the size of your browser. You’re approaching tablet territory. What happens to the site? Does it break? Is it readable? Continue to scale the browser window down horizontally until you can’t go any further. You’re in cell phone territory. How does it look now? Is it usable at all or do you have to scroll to see anything?

The site you’re on right now has been designed with responsive principles, so you can do that demo if you’re on a desktop computer. If you’re on a phone or tablet, you should be having an enjoyable reading experience that feels like it was meant for your device. This is the main goal of responsive design.

Get Moving, Responsively

There are a lot of other considerations that come into play when you are making decisions about how to design or redesign your site or application. There are even use cases where separate sites for specific devices might make sense. But it’s hard to go wrong with web standards and responsive design, which are not only the quickest solutions to building a site that can be comfortably viewed by anyone, but also the most future proof.

Further Resources

Learning Responsive Design

Responsive Design Templates
