Design Haven What's New?

After Editorially: The Search For Alternative Collaborative Online Writing Tools

I’m going to let you in on a little secret: the best writers, be it your favorite authors or those who write for Smashing Magazine, don’t do it alone. Often, they work with an editor (or two), who will help them coalesce their words into something more compelling or easier to understand.

Having worked with several editors — and having been a technical editor myself — I’ve really come to appreciate this aspect of the writing process. Refinement is essential to any creative process. Just as refactoring code can make a program more logical and efficient, editing a text can allow an underlying idea to be stated more clearly, or make a piece more enjoyable to read.

The post After Editorially: The Search For Alternative Collaborative Online Writing Tools appeared first on Smashing Magazine.

What You Need To Know About WordPress 3.9

The latest version of WordPress, named “Smith,” was released yesterday, bringing another round of core changes. This time, the team worked mainly on back-end editing and admin functions, such as a big TinyMCE (visual editor) update, gallery previews, media playlists, an improved widget UI and live theme previews (to mention only a few). Here’s what you need to know about the major changes in WordPress 3.9.

While the old widget interface set the standard for drag-and-drop UI when it was introduced, it was time for an overhaul. The developer team took the Widget Customizer plugin and essentially built it into the core.

Understanding CSS Timing Functions

People of the world, strap yourself in and hold on tight, for you are about to experience truly hair-raising excitement as you get to grips with the intricacies of the hugely interesting CSS timing function!

OK, so the subject matter of this article probably hasn’t sent your blood racing, but all jokes aside, the timing function is a bit of a hidden gem when it comes to CSS animation, and you could well be surprised by just how much you can do with it.

Why You Should Get Excited About Emotional Branding

Globalization, low-cost technologies and saturated markets are making products and services interchangeable and barely distinguishable. As a result, today’s brands must go beyond face value and tap into consumers’ deepest subconscious emotions to win the marketplace.

The Role Of Brands Is Changing

In recent decades, the economic base has shifted from production to consumption, from needs to wants, from objective to subjective. We’re moving away from the functional and technical characteristics of the industrial era, into a time when consumers are making buying decisions based on how they feel about a company and its offer.

BusinessWeek captured the evolution of branding back in 2001:

“A strong brand acts as an ambassador when companies enter new markets or offer new products. It also shapes corporate strategy, helping to define which initiatives fit within the brand concept and which do not. That’s why the companies that once measured their worth strictly in terms of tangibles such as factories, inventory and cash have realized that a vibrant brand, with its implicit promise of quality, is an equally important asset.”

I’d take it a step further and suggest that the brand is not just an important part of the business — it is the business. As Dale Carnegie says:

“When dealing with people, let us remember we are not dealing with creatures of logic. We are dealing with creatures of emotion.”

It’s Time To Get Emotional

In a borderless world where people are increasingly doing their research and purchases online (75% of Americans admit to doing so while on the toilet), companies that don’t take their branding seriously face imminent demise.

Enter emotional branding. It’s a highly effective way to cause reaction, sentiments and moods, ultimately forming experience, connection and loyalty with a company or product on an irrational level. That’s the ironic part: Most people don’t believe they can be emotionally influenced by a brand. Why? Because that’s their rational mind at work. People make decisions emotionally and then rationalize them logically. Therefore, emotional branding affects people at a hidden, subconscious level. And that’s what makes it so incredibly powerful.

Neuroscientists have recently made great strides in understanding how the human mind works. In his book Emotional Design: Why We Love (or Hate) Everyday Things, cognitive scientist Donald Norman explains how emotions guide us:

“Emotions are inseparable from and a necessary part of cognition. Everything we do, everything we think is tinged with emotion, much of it subconscious. In turn, our emotions change the way we think, and serve as constant guides to appropriate behavior, steering us away from the bad, guiding us toward the good.”

Emotions help us to rapidly choose between good and bad and to navigate in a world filled with harsh noise and unlimited options. This concept has been reinforced by multiple studies, including ones conducted by neuroscientist Antonio Damasio, who examined people who are healthy in every way except for brain injuries that have impaired their emotional systems. Due to their lack of emotional senses, these subjects could not make basic decisions on where to live, what to eat and what products they need.

Recognize your emotions at play. Rice or potatoes? Saturday or Sunday? Say hello or smile? Gray or blue? The Rolling Stones or The Beatles? Crest or Colgate? Both choices are equally valid. It just feels good or feels right — and that’s an expression of emotion.

Emotions are a necessary part of life, affecting how you feel, how you behave and how you think. Therefore, brands that effectively engage consumers in a personal dialogue on their needs are able to evoke and influence persuasive feelings such as love, attachment and happiness.

Creativity Is Critical

What does that mean to marketers? Good ideas are increasingly vital to businesses. And that’s good news for creative professionals and agencies.

A Wall Street Journal article titled “So Long, Supply and Demand” reports:

“Creativity is overtaking capital as the principal elixir of growth. And creativity, although precious, shares few of the constraints that limit the range and availability of capital and physical goods. In this new business atmosphere, ideas are money. Ideas are, in fact, a new kind of currency altogether — one that is more powerful than money. One single idea — especially if it involves a great brand concept — can change a company’s entire future.”

As Napoleon Hill says:

“First comes thought; then organization of that thought, into ideas and plans; then transformation of those plans into reality. The beginning, as you will observe, is in your imagination.”

Emotional Branding In Action

Let’s look at some examples of branding and campaigns that go for the heart and, in some cases, hit the mark.

WestJet Christmas Miracle

WestJet Airlines pulled on heartstrings this past holiday season with a video of Santa distributing Christmas gifts to 250 unsuspecting passengers. The Canadian airline expected around 800,000 views but blew their competitors’ campaigns out of the air with more than 35 million views.

How the WestJetters helped Santa spread some Christmas magic to their guests. (Watch on YouTube)

Coca-Cola Security Cameras

While surveillance cameras are known for catching burglaries and brawls, a Coca-Cola ad released during the latest Super Bowl encourages us to look at life differently by sharing happy, moving moments captured on security cameras. You’ll witness people sneaking kisses, dancing and random acts of kindness.

All the small acts of kindness, bravery and love that take place around us, recorded by security cameras. (Watch on YouTube)

Homeless Veteran Time-Lapse Transformation

Degage Ministries, a charity that works with veterans, launched a video showing a homeless US Army veteran, Jim Wolf, getting a haircut and new clothes as part of an effort to transform his life. Degage Ministries told Webcopyplus that Wolf has completed rehab and is turning his life around, and that the video has so far raised more than $125,000, along with increased awareness of and compassion for veterans across the country.

A video of a homeless veteran named Jim, who volunteered to go through a physical transformation in September 2013. (Watch on YouTube)

Creating Emotional Connection

While neuroscientists have only recently made significant strides in understanding how we process information and make decisions, humans have been using a powerful communication tactic for thousands of years: storytelling. It’s a highly effective method to get messages to stick and to get people to care, act and buy.

The stories that truly engage and are shared across the Web are typically personal and contain some aspect of usefulness, sweetness, humor, inspiration or shock. Also, the brand has to be seen as authentic, not manufactured, or else credibility and loyalty will be damaged.

I discussed the Coca-Cola video with Kevin McLeod, founder and CEO of Yardstick Services, who suggests that most brands merely try to connect the emotions of a real moment in life to their brand.

“The Coke video is full of wonderful clips of people doing things that make us all feel good. I’m not going to lie, it got my attention and is very memorable. At the same time, I’m intelligent enough to see what Coke is doing. With the exception of the last clip, none of the ‘good things’ in the video are related to Coca-Cola.

The ad primes us by making us feel good and then drops the brand at the end so that we connect those emotions to the Coke brand. It’s very shrewd. Part of me thinks it’s brilliant. The other part of me thinks it’s overly manipulative and beguiles a product that can’t stand on its own merits, of which caramel-colored, carbonated sugar water has few.”

McLeod puts forth sharp views about Coke merely stamping its brand on a video compilation, which could very well have been IBM, Starbucks or virtually any other company. However, while he consciously found the video to manufacture emotions, he still enjoyed it, stating that it makes us all — including him — “feel good.” So, despite McLeod’s skepticism and resistance, it still made an emotional connection with him. There’s the desired association: Coke = feeling good.

To get the most success in creating an emotional connection with people, stories should explore both brand mystique and brand experience, and the actual product or service should be integrated. A brilliant example is The Lego Movie, released by Warner Bros earlier this year. The Lego brand delivered a masterful story, using its products as the stars. The brand got families and kids around the globe to shovel out well over $200 million for what could be the ultimate toy commercial.

Designers, developers, copywriters and marketers in general should take a page from moviemakers, including the late writer, director and producer Sidney Lumet. He gave the following advice on making movies: “What is the movie about? What did you see? What was your intention? Ideally, if we do this well, what do you hope the audience will feel, think, sense? In what mood do you want them to leave the theater?” The same could be asked when you’re developing a brand story: What do you want the audience to feel?

Even product placement, where everything from sneakers to cars gets flashed on the screen, has evolved into “branded entertainment.” Now, products are worked into scripts, sometimes with actual roles. A well-known example is in the film Cast Away, in which Wilson, a volleyball named after the brand, serves as Tom Hanks’ personified friend and sole companion for four years on a deserted island. When Wilson gets swept away into the ocean and slowly disappears, sad music ensues, and many moviegoers shed tears over… well, a volleyball.

Making Brands Emotional

Connecting people to products and services is not an easy task. It takes careful consideration and planning. US marketing agency JB Chicago found success sparking an emotional connection for Vitalicious, its client in the pizza industry. Its VitaPizza product had fewer calories than any competitor’s; however, its message was getting lost among millions of other messages. Explains Steve Gaither, President of JB Chicago:

“We needed to bring that differentiation front and center, letting the target audience, women 25-plus interested in healthy living, know they can eat the pizza they love and miss without consuming tons of calories.”

A relationship concept was formed, and a campaign was soon launched with the following key messages: “You used to love pizza. And then the love affair ended. You’ve changed. And, thankfully, pizza has too! Now you and your pizza can be together again.” The agency then tested different ads, each centered on one of the following themes:

  • sweepstakes,
  • 190 calories,
  • gluten-free/natural,
  • “You and pizza. Reunited. Reunited and it tastes so good.”

The brand idea outperformed the other ads by a margin of three to one. Bringing a story into the equation resonated with the target audience.

Gaither also shared insight on a current story-building project for StudyStars, an online tutoring company whose brand wasn’t gaining traction. JB Chicago overhauled the brand and created a story to demonstrate that StudyStars is a skills-based tutoring system with a deep, fundamental approach to learning, one that ultimately delivers better outcomes.

“We needed to find and build camp at a place where skills-based tutoring intersects with the unmet needs of the buyer. We needed a powerful brand idea that enables us to claim and defend that space. And we needed to express that idea in a manner that is believable and differentiated.”

Seeking a concept that would look, feel, speak and behave differently, JB Chicago crafted the brand idea “Master the Fundamentals.” It suggests that learning is like anything else: You have to walk before you can run, or else you will fall. So, the agency is setting up a campaign, including a video, to show that students who fall behind in school due to weak learning of the fundamentals don’t just fall behind in the classroom — their struggles affect every other aspect of their lives.

Here’s a snippet of the drafted script:

Title: Pauline’s Story

We see a beautiful little girl in a classroom. Pauline. She is 8 years old. We can also see that she’s a little lost.

A quick shot of the teacher at the chalkboard, teaching simple multiplication, like 9 × 6. Back to Pauline. She’s not getting it.

We see Pauline again at age 12, again in class. She is looking at a math quiz. It’s been graded. She got a D.

There’s a sign hanging from her neck. The sign says “I never learned multiplication.”

We see Pauline again, now at 15. She is home. Her parents are screaming at each other about her poor academic performance. The sign around her neck is still there. “I never learned multiplication.”

We see a young waitress in a dreary coffee shop. It takes us a few seconds to realize that it’s Pauline, age 18. She is tallying a customer’s check.

A close shot of the check. Pauline is trying to calculate the tax. She can’t do it, so she consults a cheat sheet posted nearby. She’s still wearing the sign. “I never learned multiplication.”

She figures the tax out and brings the check over to an attractive collegiate-looking couple, who thank her and head for the door. She watches them leave.

Their life is everything hers is not. Their future is everything hers will never be. Slate (text) states StudyStars’ case, and the video ends with an invitation to visit

JB Chicago created a story that draws us in and links to emotions — possibly hope, fear, promise, security and other feelings — according to the person’s mindset, experience, circumstance and other factors. The key is that it gets to our hearts.

Emotional Triggers

Different visitors connect to and invest in products and services for different reasons. To help you strike an emotional chord with your audience, veteran marketer Barry Feig has carved out 16 hot buttons in Hot Button Marketing: Push the Emotional Buttons That Get People to Buy:

  • Desire for control
  • I’m better than you
  • Excitement of discovery
  • Revaluing
  • Family values
  • Desire to belong
  • Fun is its own reward
  • Poverty of time
  • Desire to get the best
  • Self-achievement
  • Sex, love, romance
  • Nurturing response
  • Reinventing oneself
  • Make me smarter
  • Power, dominance and influence
  • Wish-fulfillment

How Does It Make You Feel?

As emotional aspects of brands increasingly become major drivers of choice, it would be wise for designers, content writers and other marketers to peel back customers’ deep emotional layers to identify and understand the motivations behind their behavior.

So, the next time you ask someone to review your design or content, maybe don’t ask, “What do you think?” Instead, the smarter question might be:

“How does it make you feel?”

© Rick Sloboda for Smashing Magazine, 2014.

A Guide To Validating Product Ideas With Quick And Simple Experiments

You probably know by now that you should speak with customers and test your idea before building a product. What you probably don’t know is that you might be making some of the most common mistakes when running your experiments.

Mistakes include testing the wrong aspect of your business, asking the wrong questions and neglecting to define a criterion for success. This article is your guide to designing quick, effective, low-cost experiments.

A Product With No Users After 180 Days

Four years ago, I had a great idea. What if there was a more visual way to learn about our world? Instead of typing search queries into a text field, what if we could share visual queries with our social network and get information about what we’re looking at? It could change the way people search! I went out, found my technical cofounder, and we started building a photo Q&A app. Being a designer, I naturally thought that the branding and user experience would make the product successful.

Six months later, after rigorous usability testing and refinement of the experience, we launched. Everyone flocked to the app and it blew up overnight. Just kidding. No one cared. It was that devastating moment after a great unveiling when all you hear are crickets chirping.

Confused and frustrated, I went back to the drawing board to determine why this was. My cofounder and I parted ways, and, left without technical expertise, I decided to step out and do some research by interviewing potential users of the app. After a few interviews, the root cause of the failed launch finally dawned on me. My beautifully designed solution did not solve a real human need. It took five days of interviews before I finally accepted this truth and slowly let go.

The good news for you is that you don’t need to go through the same pain and waste of time. I recently started working on another startup idea. This time, I followed a structured process to identify key risks and integrate customer feedback early on.

A Product With 16 Paying Customers After 24 Hours

I work with many entrepreneurs to help them build their companies, and they always ask me for feedback on the user experience of their Web and mobile apps. They express frustration with finding UX talent and want some quick advice on their products in the meantime. This happens so frequently that I decided to learn more about their difficulties and see what might solve their problems.

I specified what I was trying to learn by forming a hypothesis:

“Bootstrapped startup founders have trouble getting UX feedback because they have no reliable sources to turn to.”

To test this, I set a minimum criterion for what I would accept as validation to further explore this opportunity. I had enough confidence that this was a big problem to set a criterion as high as 6 out of 10. This means that 6 out of the 10 people I interviewed needed to indicate that this was a big enough problem in order for me to proceed.

By stating my beliefs up front, I held myself accountable to the results and reduced the influence of any retroactive biases. I knew that if these entrepreneurs already had reliable sources to turn to for UX feedback and that they were happy with them, then generating demand for an alternative solution would be much harder.

Design of my first experiment on the Experiment Board.

In three hours of interviews, I was able to validate the pain point and the need for a better alternative. (You can watch a video walkthrough of my findings.) My next step was to test the riskiest assumption related to the solution that would solve this problem. Would they pay for an online service to get feedback from UX designers?

Instead of building a functioning marketplace with designer portfolios and payment functionality, or even wireframing anything, I simply set up a landing page with a price tag in the call to action to test whether visitors were willing to pay. This is called a pitch experiment. You can use QuickMVP to set up this kind of test.

Test if customers will pay for the service.

Behind the landing page, I attached a form to gather information on what they needed help with. Within a few hours, 10 people had paid for the service, asking for UX feedback on their websites. Having validated the demand, I needed to fulfill my promise by delivering the service to the people who had paid.

Did not build any functionality; just a form to collect information.

Because this market is two-sided — with entrepreneurs and designers — I tested the demand side first to see whether the solution provided enough value to elicit payment. Then, I tested the supply side to learn what kind of work designers were looking for and whether they were willing to consult virtually.

Test the second side of the market: the supply side.

To my surprise, the UX designers I spoke with had established clientele and were very picky about new clients, let alone wanting to consult with random startups online. But they all mentioned that they had been eager to take on any work when they were first starting out and looking for clients. So, I switched my focus to UX designers who are not yet established and are open to honing their skills by giving feedback online.

Armed with these insights, I iterated on my landing page to accommodate both sides of the market and proceeded to fulfill the demands of the customers I had accumulated. No wireframes. No code. Just a landing page and two forms!

A landing page that tests both sides of the market, simultaneously.

Interest in this service can also be measured by their willingness to fill out the form.

To simulate the back-end functionality, I emailed the requests to the UX designers, who would then respond with their feedback, which I would email back to the startup founders. Each transaction would take five minutes, and I did this over and over again with each customer until I could no longer handle the demand.

Do things that don’t scale in order to acquire your earliest customers and to identify business risks that you might overlook when implementing the technical aspects. This is called a concierge experiment. Once the manual labor has scaled to its limit, then write the code to open the bottleneck. Through this process, I was able to collect feedback on use cases, user expectations and ideas for improvements. This focused approach allowed for more informed iterations in a shorter span of time, without getting lost in wireframing much of the application up front.
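The five-minutes-per-transaction figure above makes it easy to estimate when the concierge approach stops scaling. A rough capacity sketch (the hours-per-day figure is an assumption for illustration, not from the article):

```python
# Back-of-the-envelope concierge capacity check.
MINUTES_PER_TRANSACTION = 5    # manual email round-trip per customer (from the article)
FULFILLMENT_HOURS_PER_DAY = 4  # assumed time available for manual delivery per day

daily_capacity = (FULFILLMENT_HOURS_PER_DAY * 60) // MINUTES_PER_TRANSACTION
print(daily_capacity)  # 48 customers per day before the concierge becomes the bottleneck
```

Once demand approaches a ceiling like this, that is the signal to write code to open the bottleneck.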

Today, the service connects startup founders with UX designers for feedback on their websites and apps, and designers get paid for their expertise and feedback.

How To Create A Product That People Want

What did I do differently? The structured process of testing my assumptions saved me time and confirmed that each part of my work was actually creating value for end users. Below is a breakdown of the steps I took, so that you can do the same.

Should You Build Your Idea?

My first mistake with my first startup was assuming that others had the same problem that I experienced. This is a common assumption that many gloss over. Build products that scratch your own itch, right? But many entrepreneurs realize too late that the problems they’re trying to solve are not painful enough to sustain a business.

As product people, we often have many ideas bubbling in our heads. Before getting too excited, test them and decide which one is the most viable to pursue.

Design An Effective Experiment

To get started, break down your idea into testable elements. An effective experiment must clearly define these four elements:

  1. hypothesis,
  2. riskiest assumption,
  3. method,
  4. minimum criterion for success.

At Lean Startup Machine, we’ve created a tool called an Experiment Board, which enables us to easily turn crazy ideas into effective experiments in a few minutes. As you go along, use the board as a framework to design your experiment and track progress. Refer to the templates provided on the board to quickly formulate your hypothesis, riskiest assumption, method and success criterion. You can also watch my video tutorial for more information on designing effective experiments.
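The four elements can be captured in a small record so that each experiment is fully specified before you run it. A minimal sketch (the field names are illustrative, not an official Experiment Board schema):

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    """One experiment on the board (field names are illustrative)."""
    hypothesis: str           # "I believe customer x has a problem achieving goal y."
    riskiest_assumption: str  # the belief that, if false, invalidates the hypothesis
    method: str               # "exploration", "pitch" or "concierge"
    success_threshold: int    # minimum positive results needed to proceed
    sample_size: int          # how many people you will test with

# The article's first experiment, expressed in this form:
exp = Experiment(
    hypothesis="Bootstrapped founders have trouble getting UX feedback",
    riskiest_assumption="They have no reliable sources to turn to",
    method="exploration",
    success_threshold=6,
    sample_size=10,
)
print(exp.method)  # exploration
```

Writing the experiment down in one place like this is what makes the later "did it meet the criterion?" question unambiguous.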

Construct a Hypothesis

Every experiment starts with a hypothesis. Start by forming a customer-problem hypothesis. Once it is validated, you can go on to form a problem-solution hypothesis.

  1. Define your customer.
    Which customer segment experiences the most pain? They are your early adopters, and you should target them first. These are the people who have the problem you’re solving for, and they know they have the problem and are trying to solve it themselves and are dying for a better way! Most people have trouble identifying these customers. If you do, too, then just segment your potential customer base by level of pain and differentiating characteristics, such as lifestyle and environmental factors. Being specific will reduce the time it takes to run through experiment cycles; once you’ve tested against that one segment and found that the problem doesn’t resonate with them, you can quickly pivot to test another customer segment. In the long run, having a clear idea of who you’re building for will help you maintain a laser focus on what to prioritize and what to dismiss as noise.
  2. Define the problem.
    What problem do you believe you are solving for? Phrase it from your customer’s perspective. Too often, people phrase this from the perspective of their own lofty vision (“The Web needs to be more human”) or from a business point of view (“Customers don’t use our service enough”). Also, avoid being too broad (“People don’t recycle”). These mistakes will make your hypothesis hard to test with a specific customer, and you’ll find yourself testing for a sentiment or an opinion in interviews, rather than a solvable problem. If you have trouble, phrase the problem as if your friend was describing it you.
  3. Form a hypothesis.
    Brainstorm on a few customers and problems to consider all the possibilities. Then, combine the customer and problem that you want to focus on with this sentence: “I believe customer x has a problem achieving goal y.” You have just formed a testable hypothesis!

Identify Your Riskiest Assumption

Now that you have formed a customer-problem hypothesis, poke some holes and extract the riskiest assumption to be tested. Start by brainstorming on a few core assumptions. These are the assumptions that are central to the viability of your hypothesis or business. Think of an assumption as the behavior, mentality or action that needs to be true in order to validate the hypothesis.

Ask your team members, boss or friends to suggest any assumptions that you may have overlooked. After listing a few, identify the riskiest one. This is the core assumption that you are most uncertain about and have the least amount of data on. Testing the riskiest assumption first will speed up the experiment cycle. If the riskiest assumption is invalidated, then the hypothesis will be invalid, and you will have saved your company from going down the wrong path.
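One way to make "most uncertain, least data" concrete is to score each assumption and test the top scorer first. A sketch with made-up beliefs and arbitrary 1–5 scores, loosely modeled on the UX-feedback example:

```python
# Score each core assumption by uncertainty (how little data you have) and
# impact (how central it is to the hypothesis); test the highest scorer first.
assumptions = [
    {"belief": "Founders want outside UX feedback",            "uncertainty": 2, "impact": 5},
    {"belief": "Founders will pay for feedback online",        "uncertainty": 4, "impact": 5},
    {"belief": "Designers will consult with strangers online", "uncertainty": 5, "impact": 5},
]

riskiest = max(assumptions, key=lambda a: a["uncertainty"] * a["impact"])
print(riskiest["belief"])  # Designers will consult with strangers online
```

The scores themselves are guesses; the point of the exercise is to force the team to rank its beliefs rather than test them in arbitrary order.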

Choose a Method

After identifying the most critical aspect of your idea to test, determine how to test it. You could conduct three kinds of experiments before getting into wireframes. It’s best to start by gathering information firsthand through exploration. But you could choose a different method depending on your level of certainty and the data you already have.

  1. Exploration
    Conduct qualitative interviews to verify and deepen your understanding of the problem. Even though you experience the problem yourself, you don’t know how big it is or who else has it. By conducting exploratory interviews first, you might realize that the opportunity isn’t as big as you had thought or that a bigger problem could be solved instead.
  2. Pitch
    Make sure the solution would actually provide value by selling the concept to customers before building the product. This will measure their level of determination to solve the problem for themselves. A potential customer not taking a certain action to use your service, like paying a small deposit or submitting an email address, indicates that the problem is not painful enough or that you haven’t found the right solution.
  3. Concierge
    Personally deliver the service to customers to test how satisfied they are with your solution. Did your value proposition meet their expectations? What was useful for them? What could have been done better? How likely are they to return or recommend the service to a friend? These are all insights you can discover in this step.

Set a Minimum Criterion for Success

Before running the experiment, decide up front what result will constitute success and what result will constitute failure. The minimum criterion for success is the weakest outcome you will accept to continue allocating resources and pursuing the solution. Factors like budget, opportunity cost, size of market, level of demand and business metrics all play into it.

The criterion is usually expressed as a fraction:

“I expect x number of people out of the y number of people in the experiment to exhibit behavior z.”

I like to set the criterion according to how big I think the problem is for that customer segment, and then determine how much revenue it would have to generate in order for me to keep working on it. At this point, statistical significance is not important; if your target customer segment is very specific, then testing with 10 of them is enough to start seeing a pattern.

Once you have validated the hypothesis with a small sample of the customer segment, then you can scale up the experiments to test with larger sample sizes or with other segments.

Run the Experiment

Once you have defined these elements, you are ready to run the experiment! Have team members look at your Experiment Board and confirm whether they agree with what you’re testing. This will hold you and the team accountable to the results, so that there are no subjective arguments afterwards.

Analyze the Results and Decide on Next Steps

After gathering data from your target customers, document the results and your learning on the Experiment Board. Did the results meet your criterion for success? If so, then your hypothesis was valid, and you can move forward to test the next risk with the product. If not, then you need to form a new hypothesis based on your learning to get closer to something that holds true. Track your progress over time on the Experiment Board to get a holistic picture of your validated learning and to continually make informed decisions.

Test and repeat. You’re on your way to creating a great product that people want!

(al, il)

© Grace Ng for Smashing Magazine, 2014.

A Guide To Validating Product Ideas With Quick And Simple Experiments 0

A Guide To Validating Product Ideas With Quick And Simple Experiments

You probably know by now that you should speak with customers and test your idea before building a product. What you probably don’t know is that you might be making some of the most common mistakes when running your experiments.

A Guide To Validating Product Ideas With Quick And Simple Experiments

Mistakes include testing the wrong aspect of your business, asking the wrong questions and neglecting to define a criterion for success. This article is your guide to designing quick, effective, low-cost experiments.

The post A Guide To Validating Product Ideas With Quick And Simple Experiments appeared first on Smashing Magazine.

Building The Web App For Unicef’s Tap Campaign: A Case Study 0

Building The Web App For Unicef’s Tap Campaign: A Case Study

Since a smartphone landed in almost everyone’s pocket, developers have been faced with the question of whether to go with a mobile website or a native app.

Native applications offer the smoothest and most feature-rich user experience in almost every case. They have direct access to the GPU, making layer compositions and pixel movements buttery-smooth. They provide native UI frameworks that end users are familiar with, and they take care of the low-level aspects of UI development that developers don’t have time to deal with.

When eschewing an app in favor of a mobile website, developers often sacrifice user experience, deep native integration and a complex UI in favor of SEO and accessibility. But now that JavaScript rendering engines are improving immensely and GPU-accelerated canvas and CSS animations are becoming widely supported, we can start to consider mobile websites a primary use case.

Unicef’s latest campaign, Tap, presented us with the challenge of combining the accessibility of a mobile website with the native capabilities, UI and overall experience that someone would expect of a native app. Our friends at Droga5 came to us with a brief to create a mobile experience that tracks how long a user avoids using their phone.

Unicef's 2014 Tap campaign
Unicef’s 2014 Tap campaign presented the challenge of combining the accessibility of a mobile website with the smooth user experience of a native app.

For every 10 minutes that a user gives up their phone, a sponsor would donate a day’s worth of water to children in the developing world. While the user patiently waits, they are presented with real-time and location-based statistics of other users who are sacrificing their precious phone time.

We’ll discuss a few of the biggest challenges here: detecting user activity, achieving performant animations, and building an API integrated with Google Analytics.

Detecting User Activity

Detecting user activity through a mobile browser was an interesting challenge and involved a lot of research, testing and normalization across all types of phones. The slightest differences and inaccuracies between phones became suddenly apparent. To explain the process, we’ll break it down into three categories: user movement, user exiting, and device-sleep prevention.

User Movement

One core piece of functionality is detecting any movement by the user. Fortunately, most mobile browsers today have access to the built-in gyroscope and accelerometer via JavaScript’s DeviceOrientation event. The unfortunate exception is devices running Android 2.3 (Gingerbread), which at the time of writing has roughly a 20% market share. In the end, the project was not worth abandoning due to one version of Android, so we pushed on. This decision proved to be even better than we thought because most devices that run version 2.3 are old, which means less memory, a slower CPU and aging hardware.

To detect movement, we first have to detect an “idle” position. We instruct the user to set their phone down, while we check the readings on the x and y axis. We start a timer with a setInterval, and if that position’s values remain within a 6° range for a few seconds, then we save those values as the device’s idle position. (If the user moves, then we restart the timer again until the phone does not move for a few seconds.) From there, we listen for the DeviceOrientation event and compare the new position’s values to the idle values. If there is a difference, then we fire off a custom user_move event.
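The idle-detection step described above can be sketched as follows. This is a reconstruction, not the Tap source: the function names, the 500-millisecond sampling interval and the sample count are our own assumptions; only the 6° window and the restart-on-movement behavior come from the description above.

```javascript
// Sketch of idle-position detection (names, sampling interval and
// sample count are illustrative). A set of readings is "steady" if
// every sample stays within `range` degrees of the first one.
function isSteady(samples, range) {
   return samples.every(function(s) {
      return Math.abs(s - samples[0]) <= range;
   });
}

function watchForIdle(getReading, onIdle) {
   var xs = [], ys = [];
   var timer = setInterval(function() {
      var r = getReading(); // e.g. { x: event.beta, y: event.gamma }
      xs.push(r.x);
      ys.push(r.y);
      if (xs.length < 6) { return; } // wait for a few seconds of samples
      if (isSteady(xs, 6) && isSteady(ys, 6)) {
         clearInterval(timer);
         onIdle({ x: xs[0], y: ys[0] }); // save as the device's idle position
      } else {
         xs = []; ys = []; // movement detected: restart the timer
      }
   }, 500);
}
```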

The concept was simple to implement, but we found that most devices fluctuate by a couple of degrees when lying still. The sensitivity to movement is quite high, so we first had to determine a threshold above which we could be confident that the user has intentionally moved their device. After some trial and error, we decided on a 12° range of difference (+ and -) from the idle position, on both the x and y axis. If any movement occurs outside of that range, we assume it to be deliberate. Thus, users can bump their phone slightly with no consequence.

this.devOrientHandlerProxy = $.proxy(this.devOrientHandler, this);
window.addEventListener('deviceorientation', this.devOrientHandlerProxy, false);

MovementDetector.prototype.devOrientHandler = function(event) {
   var curr_x = Math.floor(event.beta);
   var curr_y = Math.floor(event.gamma);
   var curr_z = Math.floor(event.alpha);

   var didMove = this.calcMovement(curr_x, curr_y, curr_z, this.movement_threshold);

   if(didMove) {
      //fire off the custom user_move event here
   }
};

MovementDetector.prototype.calcMovement = function(new_x, new_y, new_z, threshold) {
   var x_diff = Math.abs(this.x_idle_pos - new_x);
   var y_diff = Math.abs(this.y_idle_pos - new_y);
   var z_diff = Math.abs(this.z_idle_pos - new_z);
   z_diff = z_diff > 180 ? 360 - z_diff : z_diff;

   return x_diff > threshold || y_diff > threshold || z_diff > threshold;
};

As you can see in the first four lines of the calcMovement method, we are obtaining the difference between the idle position and the new position. Because the difference in values could be negative, we make sure to get the absolute value (Math.abs(val)). You’ll notice that the z_diff formula is a bit different. Because the value for z_diff is between 0 and 359, we have to take the absolute difference and then check to see whether the difference is above 180; if so, then we need to subtract that difference from 360.

This gives us the shortest distance between the two points. For example, if the device moves from 359 to 10, then the shortest distance would be 11. Finally, we check to see whether any of those three values (x_diff, y_diff, or z_diff) are greater than the threshold; if so, then we announce a user_move event.

Movement detection on iOS and Android
Movement detection on iOS and Android (Samsung Galaxy S3 and HTC One). (View large version)

We had to test extensively across both Android and iOS devices. iOS was straightforward, whereas we found subtle differences between Android versions and manufacturers, especially with the stock browser. Certain devices would jump dramatically between values on the z-axis. Thus, we decided not to consider any movement on the z-axis in our detection — meaning that users could slide a phone laterally on a tabletop with no consequence.

User Exiting

Another action that we wanted to detect was the user exiting the browser, to signal their intention to end the experience. We had to listen for a couple of events via the PageHide and PageVisibility API. (PageHide or PageVisibility is available in Android only in later versions — in the stock browser in version 4.3+, and in Chrome 4+. iOS 6 has PageHide, and iOS 7 has PageVisibility.)

We knew we couldn’t detect across the board, but we felt that implementing it for browsers that support it would be worthwhile. The following matrix shows which mobile browsers support PageHide and PageVisibility:

Devices PageHide event PageVisibility API
iOS 6.0 Safari
iOS 7.0 Safari
iOS 6.0 Chrome
iOS 7.0 Chrome
Android 2.3 — 4.2 stock browser
Android 4.3 stock browser
Android 4.4 stock browser
Android 4.0+ Chrome
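Listening for both mechanisms can be sketched like this (a reconstruction: the function name is ours, while the event and property names, including the webkit-prefixed forms, are per the PageHide and Page Visibility APIs):

```javascript
// Sketch of exit detection via the PageHide and Page Visibility APIs
// (the function name is illustrative).
function onUserExit(callback) {
   // iOS 6 Safari fires pagehide when the user leaves the page
   window.addEventListener('pagehide', callback, false);

   // iOS 7 and newer Android browsers expose the Page Visibility API,
   // possibly with a webkit prefix
   var hiddenProp, visEvent;
   if (typeof document.hidden !== 'undefined') {
      hiddenProp = 'hidden';
      visEvent = 'visibilitychange';
   } else if (typeof document.webkitHidden !== 'undefined') {
      hiddenProp = 'webkitHidden';
      visEvent = 'webkitvisibilitychange';
   }

   if (visEvent) {
      document.addEventListener(visEvent, function() {
         if (document[hiddenProp]) { callback(); }
      }, false);
   }
}
```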

Sleep Prevention

Keeping the device awake was the final core piece of functionality that we needed to detect user activity. This was crucial because the idea of the campaign is for users to stay away for as long as they possibly can. By default, all phones enter sleep mode after a few minutes. Some phones can be manually set to never sleep, or the user could keep it plugged in, but we could not rely on either of those options.

We had to think of interesting workarounds. iOS and Android had to be treated differently.

For iOS 6 devices, we make use of HTML5 audio and load a silent MP3 file asynchronously that loops endlessly during game play. We simply set the loop attribute to true on our <audio> element. For Android devices, we piggyback on what we do for iOS 6. However, Android’s display turns off after a few minutes even when an audio file is playing. Fortunately, unlike iOS, Android allows for inline video.

So, we run the same createMediaLoop method as for audio, but this time loading a 10-minute silent video with the <video> element, placed outside of the viewport. We found that the loop attribute doesn’t always work with inline video across Android devices, so we use HTML5’s media ended event instead. By looping a hidden video, we are able to keep Android devices from going to sleep.

Here is some sample code:

//for iOS 6
var media_type = 'audio';
var media_file = 'silence.mp3';

//for Android
var media_type = 'video';
var media_file = 'silence.mp4';

ExampleClass.prototype.createMediaLoop = function(media_type, media_file) {
   this.mediaEl = document.createElement(media_type);
   this.mediaEl.className = 'mediaLoop';
   this.mediaEl.setAttribute('preload', 'auto');

   var mediaSource = document.createElement('source');
   mediaSource.src = media_file;

   switch(media_type) {
      case 'audio':
         //create an audio element in iOS 6
         //and play a silent MP3 file
         this.mediaEl.loop = true;
         mediaSource.setAttribute('type', 'audio/mpeg');
         break;
      case 'video':
         //create a video element for Android devices
         //and play a silent video file
         mediaSource.setAttribute('type', 'video/mp4');
         var _self = this;

         //loop manually, because the loop attribute is
         //unreliable for inline video on Android
         this.mediaEl.addEventListener('ended', function() {
            _self.mediaEl.currentTime = 0;
         }, false);
         break;
   }

   this.mediaEl.volume = 0;
   this.mediaEl.appendChild(mediaSource);
};

iOS 7 is much easier. Thanks to a UI update in the browser, the address bar always remains on screen, unlike in iOS 6. So, we call an update to the browser’s URL every 20 seconds, thus preventing sleep mode.

setInterval(function() {
   window.location.href = '';
}, 2e4);

We cannot use this method for iOS 6 because the user would notice the address bar slide into the view and then slide back out.

Achieving Performant Animations

Animations are important in reinforcing the theme of water and making the experience fun. Whether we were creating a water-ripple effect, bubbles or waves, we isolated each animation and programmed different approaches to achieve the best result. Knowing that we had to do this for a slew of browsers by various manufacturers, we took the following into consideration:

  • Performance
    Do frames get dropped when testing against supported devices? How does GPU rendering compare to CPU rendering?
  • Value added
    How much does the animation really add to the experience? Could we conceivably drop it?
  • Loading size
    How much does the animation add to the website’s overall load? Does it require a library?
  • Compatibility with iOS 6+ and Android 4+
    Does it require complex fallbacks?

Bubbles

Let’s first look at bubbles, which animate from bottom to top. The design called for floating bubbles, whose size, focus and opacity would provide a sense of depth within the environment. We decided to test a few approaches, but these are the main two we were curious about:

  • Animating DOM elements using hardware-accelerated CSS 3-D transforms (transform: translate3d(x, y, z));
  • Rendering all circles on a 2-D canvas element.

Note: Animating via the top/left properties is not an option due to the lack of subpixel rendering and the long time to paint each frame. Paul Irish has written more about this.

We tested several approaches to find the best method to animate the bubbles. (View demo)

We pulled off the canvas method by creating two transparent canvases: one on top of the content and one below. We create our bubbles as objects with randomized properties in memory (diameter, speed, opacity, etc.). At each frame, we clear the canvas via context.clearRect(0, 0, width, height);, and then draw each bubble to the screen. To create a floating, bubble-like movement, we need to change each bubble’s x and y values in each frame. For the y-axis, we subtract a constant value in each frame: b.y = b.y - b.speed;.

In this case, we determine a unique speed for each bubble using (Math.random() / 2) + 0.1). For the x-axis, we need a smooth repetitive oscillation, which we can achieve by taking the sine value of its frame count: b.x = b.startX + Math.sin(count / b.amplitude) * 50;. You can view the extracted code and the demo.
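That per-frame math can be isolated as a pure update function. This is a sketch: the property names and the amplitude range are illustrative, while the speed formula and the sine-based x movement come from the description above.

```javascript
// A bubble with randomized properties (a sketch; the amplitude range
// is illustrative).
function makeBubble(startX, startY) {
   return {
      startX: startX,
      x: startX,
      y: startY,
      speed: (Math.random() / 2) + 0.1, // unique speed per bubble
      amplitude: 20 + (Math.random() * 40)
   };
}

// Per-frame update: constant upward drift on the y-axis, smooth sine
// oscillation around startX on the x-axis.
function updateBubble(b, count) {
   b.y = b.y - b.speed;
   b.x = b.startX + Math.sin(count / b.amplitude) * 50;
   return b;
}
```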

The DOM-based implementation using CSS 3-D transforms follows a very similar method. The only big differences are that we dynamically create and insert DIV elements at the beginning and, using Modernizr, apply vendor-prefixed translate3d(x, y, z) properties on each animation frame. You can view the extracted code and the demo.

To optimize performance, we considered a canvas implementation because GPU acceleration has been enabled for the browsers we support (iOS 5 with its Nitro JavaScript, and Chrome for Android 4+); however, we noticed severe issues with aliasing and the frame rate on Android devices.

Timeline profiles using the canvas element and CSS 3-D transforms
Timeline profiles using the canvas element and CSS 3-D transforms (View large version)

We also did some performance profiling in Chrome’s emulation mode on the desktop (better methods exist for doing more granular remote testing on a mobile device). The difference in results between the two was still interesting: A GPU-accelerated 2-D canvas showed better performance than GPU-accelerated CSS transforms, especially with a higher number of DOM elements, due to the rendering time for each one and the recalculation of styles.

We used CSS 3-D transforms to animate the bubbles.
After carefully considering several techniques, we went with CSS 3-D transforms to animate the bubbles. (View large version)

In the end, we used CSS 3-D transforms. We only need to animate 16 bubbles at a time, and the CPU and GPU on supported devices collectively seem to handle the overhead just fine. The performance and anti-aliasing issues with canvas rendering on old Android devices were the determining factors. At the time of writing and in this particular case, canvas wasn’t an option for us, but browser vendors certainly are not ignoring it, and the latest rendering engines of mobile browsers have seen massive improvements.

Waves

We use wave animations throughout both the mobile and desktop experience — specifically, as a design detail to reinforce the water theme and as a transition to wash away old content and bring in new content. As with the bubbles, we explored using both canvas and CSS-based animations. And likewise, CSS animations were the way to go. A wave PNG costs us only 7 KB, and we get much better performance from mobile browsers across the board.

As with bubbles, we explored using both canvas and CSS-based animations. (View demo)

Our isolated demo of the desktop implementation (which is similar to mobile) is really quite simple. Each wave is a background image set with background-repeat:repeat-x and a looping keyframe animation that moves left with linear easing. We make the speed of the waves in front slightly faster and the waves in the back slower to simulate depth. You can view the code, which uses Sass and Compass.
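A minimal sketch of that setup looks something like the following (the class names, image path, tile width and timings are illustrative; the article’s own version is written in Sass with Compass):

```css
/* Each wave layer tiles a PNG horizontally and slides it left forever
   with linear easing; the front layer moves faster to fake depth. */
.wave {
  background: url(wave.png) repeat-x 0 bottom;
  animation: slide-left 10s linear infinite;
}

.wave--front {
  animation-duration: 6s; /* faster = appears closer */
}

@keyframes slide-left {
  from { background-position: 0 bottom; }
  to   { background-position: -800px bottom; } /* one tile width */
}
```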

We also tried a very different vanilla JavaScript approach by creating a wave oscillation. We used a wave oscillator algorithm created by Ken Fyrstenberg Nilsen, adjusting it to suit our needs. You can view this demo, too.

We abandoned the oscillation effect because of poor performance on old Android devices.

The effect turned out to be really nice and organic, but the performance was lacking on old Android devices, so we abandoned the approach altogether.

Building An API Integrated With Google Analytics

During gameplay, we wanted to provide some insightful facts and location-based statistics, as well as encourage users to keep playing. We used several APIs, combining them with scores from our database.

The back end is run off of the Laravel PHP framework and a few APIs. For location-based statistics, we could have asked the user for their location via HTML5 geolocation, but we wanted a more seamless experience and didn’t want to interrupt the user with a confirmation dialog box. We don’t need a precise location, so we opted for MaxMind’s GeoIP2 service. This service gives us enough data to get the user’s rough location, which we can combine with other services and data.

We also want people to know that they are a part of a bigger community, so we wanted to provide statistics based on website analytics. The obvious choice was to use Google Analytics’ new API, as well as its newer Real Time Reporting API.

Because we have access to different kinds of data, we are able to display facts that are relevant to the user. For example, a user in the US would get a statistic on how their state compares to other states in the country, according to Google Analytics. By using Google’s Real Time Reporting API, we see how many active users are on the website, and we display that to the user, illustrating other people’s participation. In our PHP code, we use the Google Analytics for Laravel 4 package, which handles a lot of the boilerplate and makes it much easier to get data back from Google Analytics’ API.

$this->ga_realtime_metric = 'ga:activeVisitors';
$ga_service = Analytics::getService();
$optParams = array('dimensions' => $this->ga_dimensions, 'sort' => '-' . $this->ga_realtime_metric);

$results = $ga_service->data_realtime->get([google profile id], $this->ga_realtime_metric, $optParams);

We also use GeoIP2’s service when recording people’s times, so that we can display the scores for a particular city or state.

To prepare for spikes in traffic, to stay within each API’s rate limit (Google Analytics’ limit is 50,000 requests per project per day) and to optimize speed, we cache some data at 10-minute intervals. We cache certain other data, such as GeoIP2’s, even longer (every five days) because it doesn’t change that often.
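With Laravel’s cache helper, that layering might look something like this (a sketch: the cache keys and the fetch/lookup functions are illustrative; Cache::remember and its minutes-based duration are Laravel 4’s API):

```php
// Cache Google Analytics statistics for 10 minutes, and GeoIP2 lookups
// for five days (Laravel 4's Cache::remember takes the duration in
// minutes). Key names and the two helper functions are illustrative.
$stats = Cache::remember('ga_stats', 10, function() {
    return fetchAnalyticsStats(); // hits the Google Analytics API
});

$geo = Cache::remember('geoip_' . $ip, 5 * 24 * 60, function() use ($ip) {
    return lookUpLocation($ip); // hits MaxMind's GeoIP2 service
});
```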

Due to the ever-growing number of scores, the queries to retrieve certain statistics would take longer than is acceptable for each user. So, we set up a couple of cron jobs to run these queries every 10 minutes, caching the updated statistics on the server.

When a user hits the website, an AJAX call to the server asks for the cached data, which is returned to the browser in a big JSON response. This cuts loading times considerably and keeps us within the rate limit for each API that we use.

Conclusion

As mobile browsers continue to improve, offering new features and enhancing performance, new opportunities like this will arise. It’s always important to question whether you should build a native app or a Web app, and keep in mind the pros and cons of each, especially because the differences in their capabilities are narrowing rapidly.

Developing our Tap app for the Web not only was more affordable (with two Web developers working on a single code base, as opposed to a developer for each platform), but made it more accessible and easily shareable. We’ll never know, but we’re confident that we would not have reached 3.7 million website visits in the first month had we gone the native route. About 18% of those visits came from Safari in-app browsers — meaning that people had clicked on a link in their Facebook or Twitter feed and were taken directly into the experience. A native app would have seriously hampered that ability to share the experience or message quickly.

We hope this article has been helpful in illustrating both the thought process of Web versus native and the technical hurdles involved in building Tap. The project was really fun and challenging, it was for a good cause, and it introduced a unique mechanism for donating, one that we hope to see propagate and manifest in new and creative ways.

Further Resources

(al, il, ml)

© Nick Jonas and Francis Villanueva for Smashing Magazine, 2014.

Building The Web App For Unicef’s Tap Campaign: A Case Study 0

Building The Web App For Unicef’s Tap Campaign: A Case Study

Since a smartphone landed in almost everyone’s pocket, developers have been faced with the question of whether to go with a mobile website or a native app. Native applications offer the smoothest and most feature-rich user experience in almost every case. They have direct access to the GPU, making layer compositions and pixel movements buttery-smooth.


Native applications also provide native UI frameworks that end users are familiar with, and they take care of the low-level aspects of UI development that developers don’t have time to deal with. When eschewing an app in favor of a mobile website, developers often sacrifice user experience, deep native integration and a complex UI in favor of SEO and accessibility.

The post Building The Web App For Unicef’s Tap Campaign: A Case Study appeared first on Smashing Magazine.

How To Build A Ruby Gem With Bundler, Test-Driven Development, Travis CI And Coveralls, Oh My! 0

How To Build A Ruby Gem With Bundler, Test-Driven Development, Travis CI And Coveralls, Oh My!

Ruby is a great language. It was designed to foster happiness and productivity in developers, all the while providing tools that are effective and yet focused on simplicity. One of the tools available to the Rubyist is the RubyGems package manager. It enables us both to include “gems” (i.e. packaged code) that we can reuse in our own applications and to package our own code as a gem to share with the Ruby community. We’ll be focusing on the latter in this article.

I’ve written an open-source gem named Sinderella (available on GitHub), and in this article I’ll go through all of the steps I took to write the code (including the test-driven development process) and how I prepared it for release as a gem via RubyGems. I’ll also show you how to set up your tests to run through a continuous integration (CI) server using the popular Travis CI service.

In case you’re unfamiliar with CI, it refers to the process of merging code with a central repository, with the aim of preventing integration problems down the road in a project’s life cycle. (If you use a version control system such as git and a decentralized code repository such as GitHub, then you might already be familiar with these concepts.)

Finally, I’ll show you how to use Coveralls to measure the code coverage of your tests and to obtain a statistical history of your commits.

Image credit: The Ruby and Bundler logos, along with the Travis CI mascot.

What Does Sinderella Do?

As described in the README on GitHub, Sinderella allows the author to “pass a code block to transform a data object for a specific period of time.” So, if we provide data like the following…

{ :key => 'value' }

… then we could, for example, convert it to the following for a set period of time:

{ :key => 'VALUE' }

Once the time period has expired, the data is returned to its normal state.

Sinderella is made up of two files: the main application and a data store that holds the original and transformed data.

Later in this article, I’ll describe my development process for creating the gem, and we’ll review some of the techniques required to produce a robust and stable gem.

What We Won’t Cover

To be clear, this article is focused on creating a Ruby gem using Bundler and on following best practices, such as test-driven development and CI.

We won’t cover how to write Ruby code or how we developed the Sinderella gem. Nor will we cover how to write RSpec tests (although we will demonstrate how to set up RSpec). RSpec is a detail of implementation and can be swapped out for any testing library that you deem appropriate.

Additional Requirements

To get started, you’ll need to register for accounts with the following services:

  • GitHub
  • Travis CI
  • Coveralls

Registering for these services is free. Travis CI is free for all open-source projects (which this will be). You may pay for a Pro account, which allows you to set up CI for your private code repositories, but that’s not needed for what we’ll be doing here.

You’ll also need to be comfortable working in the command line. You don’t have to be a Unix shell scripting wizard, but I’ll be working here exclusively in a shell environment (specifically, using the Terminal on Mac OS X) to do everything, including running shell commands, opening multiplexers (such as tmux) and editing code (with Vim).

Which Version Of Ruby To Use

Ruby has many different flavors:

  • Ruby (also known as Matz’s Ruby Interpreter) is the original language, written in C.
  • Rubinius is an implementation of Ruby that is written mainly with Ruby.
  • JRuby is an implementation of Ruby built on top of the Java Virtual Machine (JVM), with Java.

I deliberately used JRuby to implement Sinderella because part of the gem’s code relies on “threads,” and MRI doesn’t provide true threading.

JRuby provides a native thread implementation because it is built on top of the JVM. But really, using any of the above variations would have been fine.

Unfortunately, though, it’s not all clear sailing with JRuby. Quite a few gems still use C extensions (i.e. code written in C that Ruby can import). At the moment, you can enable a flag in JRuby that allows it to use C extensions, but doing so is merely a temporary solution because this option is expected to be removed from JRuby in future releases.

This could be an issue, for example, if you’re using Pry (a replacement for Ruby’s irb REPL). Pry works fine with JRuby, but you wouldn’t be able to take advantage of the equally amazing pry-plus extension, which offers many extra debugging capabilities, because some of its dependencies rely on C extensions.

I’ve worked around this limitation somewhat by using pry-nav. It’s not as good and can be a little buggy in places when used under JRuby, but it gets the job done.

Bundler

To help us create the gem, we’ll use the popular Bundler gem.

Bundler is primarily designed to help you manage a project’s dependencies. If you’ve not used it before, then don’t worry because we’ll be taking advantage of a lesser known feature anyway, which is its ability to generate a gem boilerplate. (It also provides some other tools that will help us manage our gem’s packaging, which I’ll get into in more detail later on.)

Let’s begin by installing Bundler:

gem install bundler

Once Bundler is installed, we can use it to create our gem. But before doing that, let’s review some other dependencies that we’ll need.

Gem Dependencies

Developing the Sinderella gem requires five dependencies. Four are needed during the development process and won’t be needed in production. The fifth is a “hard” dependency, meaning that it is needed for the Sinderella gem to function properly.

Of these dependencies, Crimp and RSpec are specific to Sinderella. So, when developing your own gem, you would likely replace them with other gems.

RubyGems

We need to install RubyGems in order to take advantage of the package manager and its built-in gem commands (which Bundler will wrap with its own enhancements).

RSpec

RSpec is a testing framework for the Ruby programming language. We’ll cover this in more detail later on in the article.

When building your own gem, you might want to swap RSpec for a different testing tool. Another popular option is Cucumber.

Guard

Guard is a command-line tool that responds to events. We’ll be using it to more easily write code for test-driven development. It works by monitoring files that you tell it to watch and then, when it notices changes to those files, triggering some command that you specify based on the type of file that was changed.

This comes in really handy when you’re running tests in a multiplexer such as tmux or when using a terminal such as iTerm2 (which supports multiple terminal windows being open at once), because while you’re editing the code in one terminal, you can get instant feedback to breaking tests as you work on the code. This is known as a tight feedback loop (more on this later).
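A typical Guardfile for an RSpec setup looks something like this (a sketch using Guard’s standard RSpec DSL; the watch patterns shown are the conventional ones, not necessarily Sinderella’s exact configuration):

```ruby
# Guardfile: rerun the matching spec when a lib file changes,
# and rerun a spec file whenever it changes itself.
guard :rspec do
  watch(%r{^lib/(.+)\.rb$}) { |m| "spec/#{m[1]}_spec.rb" }
  watch(%r{^spec/.+_spec\.rb$})
  watch('spec/spec_helper.rb') { 'spec' }
end
```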

Pry

Pry is a replacement REPL for Ruby’s standard irb. It offers everything the standard irb does but with a lot of additional features. It’s useful for testing code to see how it works and whether the Ruby interpreter fails to run it. It’s also useful for debugging code when something doesn’t work the way you expect.

It didn’t have much of a presence in the development of Sinderella, but it is such an important tool that I felt it deserved more than a cursory mention. For example, if you’re unsure of how a particular Ruby feature works, you could test drive it in Pry.

If you want to learn more about how to use it, then watch the screencast on Pry’s home page.

Crimp

Crimp is a gem released by the BBC that allows you to convert a piece of data into a MD5 hash.

Generating A Boilerplate

OK, now we’ve finally gotten to the point where we can generate the boilerplate files for our gem.

As mentioned, Bundler has the tools to generate the foundation of a gem so that we don’t have to type it all out by hand.

Now, open up the terminal and run the following command:

bundle gem sinderella

When that command is run, the following is generated:

❯ bundle gem sinderella
  create  sinderella/Gemfile
  create  sinderella/Rakefile
  create  sinderella/LICENSE.txt
  create  sinderella/README.md
  create  sinderella/.gitignore
  create  sinderella/sinderella.gemspec
  create  sinderella/lib/sinderella.rb
  create  sinderella/lib/sinderella/version.rb
Initializing git repo in /path/to/Sinderella

Let’s take a moment to review what we have.

Folder Structure

Bundler has automatically created a lib directory for us, which holds a single Ruby file named after our project. The name of the directory is extracted from the name provided via the bundle gem command.

Be aware that if you specify a hyphen (-) in the gem’s name, then Bundler will create a deeper folder structure by using the hyphen as a delimiter. For example, if your command looks like bundle gem foo-bar, then the following directory structure would be created:

├── lib
│   └── foo
│       ├── bar
│       │   ├── bar.rb
│       │   └── version.rb
│       └── bar.rb

This is actually quite useful when you’re producing multiple gems that are all namespaced under a single project. For a real-world example of this, look at BBC News’ GitHub repository, which has multiple open-source gems published under the namespace alephant.

Gemspec

The gemspec file is used to define the particular configuration of your gem. If you weren’t using Bundler, then you would need to manually create this file (according to RubyGems’ documentation).

Below is what Bundler generates for us:

# coding: utf-8
lib = File.expand_path('../lib', __FILE__)
$LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
require 'sinderella/version'

Gem::Specification.new do |spec|
  spec.name          = "sinderella"
  spec.version       = Sinderella::VERSION
  spec.authors       = ["Integralist"]
  spec.email         = [""]
  spec.summary       = %q{TODO: Write a short summary. Required.}
  spec.description   = %q{TODO: Write a longer description. Optional.}
  spec.homepage      = ""
  spec.license       = "MIT"

  spec.files         = `git ls-files -z`.split("\x0")
  spec.executables   = spec.files.grep(%r{^bin/}) { |f| File.basename(f) }
  spec.test_files    = spec.files.grep(%r{^(test|spec|features)/})
  spec.require_paths = ["lib"]

  spec.add_development_dependency "bundler", "~> 1.5"
  spec.add_development_dependency "rake"
end

As you’ll see later, this is a basic outline of the final gemspec file that we’ll need to create. We’ll end up adding to this file some of the other dependencies that our gem will need to run (both development and production dependencies).

For now, note the following details:

  • $LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
    This adds the lib directory to Ruby’s load path, which makes require’ing files elsewhere in the code a little cleaner.
  • require 'sinderella/version'
    This loads in a version.rb file, which was generated when Bundler constructed our boilerplate. This file serves as a way to implement semantic versioning in our gem releases. Every time we release the gem, we’ll need to update the version number; then, when we run the particular Bundler command to release the gem, it will automatically pull in the updated value to our gemspec file.
  • Gem::Specification.new do |spec|
    Here, we define a new specification and include properties such as the name of the gem, the version number (see the previous point), a list of the authors of the gem and a contact email address. We can also include some descriptive text about the gem.
  • Next, we define the files to include in the gem. Any executable files found are injected dynamically into the file by looping through a bin directory (if one is found). We also dynamically inject a list of test files (which we’ll see later on when we create a spec folder to hold the tests that will ensure that the gem works as expected).
  • Finally, we define the dependencies, including both runtime and development dependencies. At the moment, there is only the latter, but soon enough we’ll have one runtime dependency to add.
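
As an aside, the spec.files line works because git ls-files -z prints file paths separated by NUL bytes; splitting on "\x0" turns that output into a clean array. A quick illustration, with a canned string standing in for the real backtick call:

```ruby
# Hypothetical output of `git ls-files -z` (NUL-separated paths):
raw = "lib/sinderella.rb\x0lib/sinderella/version.rb\x0"

# Splitting on the NUL byte yields the array assigned to spec.files.
# Ruby's String#split drops the trailing empty string automatically.
files = raw.split("\x0")
# files == ["lib/sinderella.rb", "lib/sinderella/version.rb"]
```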

The RubyGems guides has full details on the specification. You could configure a whole host of settings, but Bundler helps us by defining the essential ones.
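
For reference, the version.rb file that the specification requires is tiny. Bundler generates something like this (the version number will vary as you release):

```ruby
# lib/sinderella/version.rb
module Sinderella
  # Bumped manually before each release; pulled into the gemspec
  # via the Sinderella::VERSION constant.
  VERSION = "0.0.1"
end
```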

Gemfile

In a typical Ruby project, you’ll find that the Gemfile is filled with a list of dependencies, which Bundler then collates and installs for you. In this instance, because we’re generating a gem and not writing a standard application, our Gemfile will actually be pretty bare, made up of two lines: one to tell Bundler where to source the gems from, and the other to inform Bundler that the dependencies are listed in the gemspec file instead.
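
A freshly generated Gemfile therefore looks something like this (the comment may differ slightly between Bundler versions):

```ruby
source 'https://rubygems.org'

# Specify your gem's dependencies in sinderella.gemspec
gemspec
```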

Rakefile

Again, in a typical Ruby application, a Rakefile will contain many different tasks (written in Ruby) that you can execute via the command line. In this case, a one-line Rakefile has been provided that loads bundler/gem_tasks. That in turn loads additional rake commands that Bundler adds to make it easier to build and deploy your gem. We’ll see how to use these commands later.

License

Because we’re releasing code that could potentially be used by other developers, Bundler generates an MIT licence by default and dynamically injects the current year and your user name into it.

Feel free to either delete it or replace it with another license if the MIT one doesn’t fit your needs, although it’s pretty standard and relevant to most projects.

README

Lastly, Bundler has taken the tediousness out of generating a README file. It includes TODO messages wherever relevant, so that you know what needs to be manually added before the gem can be built (such as a description of the gem and a code example that shows how you expect the gem to be used). It also automatically generates installation instructions and a section on how other developers can fork your code and contribute new features and bug fixes.

One other benefit of Bundler is that it delivers a consistent code base across all gems you create. All gems will have the same structure, and the consistency across content such as the README file will make it easier for users who integrate more than one of your gems to understand them.

Test-Driven Development

Test-driven development (TDD) is the process of building code on top of supporting tests. Sinderella was developed using its principles.

The guiding steps are “red, green, refactor,” and TDD fundamentally breaks down as the following:

  1. Write a test.
  2. Run the test and watch it fail (because there is no code yet for it to pass).
  3. Write the least amount of code to pass the test (literally, hack it together).
  4. Refactor the code so that it’s cleaner and better written.
  5. If the test fails during refactoring, then start the red, green, refactor process again.

This is sometimes referred to as a tight feedback loop: getting quick or instant feedback on whether code is working.

By writing the tests first, you ensure that every line of code exists for a reason. This is an incredibly powerful principle and one you should recall when caught in a debate over whether TDD “sucks” or “takes too long.”

Starting a project with tests can feel daunting. But in addition to ensuring that every line of code exists for a reason, it provides an opportunity for you to properly design the APIs.
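
To make the cycle concrete, here is the loop in miniature using a hypothetical titleize helper; a plain raise stands in for a real test framework:

```ruby
# Steps 1-2: this assertion is written before any implementation exists,
# so the first run fails with a NameError (red).

# Step 3: the least amount of code that passes (green):
def titleize(str)
  str.split(' ').map(&:capitalize).join(' ')
end

# Steps 4-5: with the assertion green, refactor freely; if the behavior
# regresses, the assertion turns red again and the cycle restarts.
raise 'red' unless titleize('hello world') == 'Hello World'
```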

RSpec

As for writing tests for Sinderella, I chose to use RSpec, which is described thus on its website:

RSpec is a testing tool for the Ruby programming language. Born under the banner of behaviour-driven development, it is designed to make test-driven development a productive and enjoyable experience

In order to use RSpec in our gem, we’ll need to update the gemspec file to include more dependencies:

spec.add_development_dependency "rspec"
spec.add_development_dependency "rspec-nc"
spec.add_development_dependency "guard"
spec.add_development_dependency "guard-rspec"
spec.add_development_dependency "pry"
spec.add_development_dependency "pry-remote"
spec.add_development_dependency "pry-nav"

As you can see, we’ve added RSpec to our list of dependencies, but we’ve also included rspec-nc, which provides native notifications on Mac OS X (rspec-nc is a nicety and not essential to produce the gem). Having notifications at the operating-system level can be quite handy, allowing you to do other things (perhaps check email) while tests run in the background.

We’ve also added (as you would expect) guard as a dependency, as well as guard-rspec, which Guard needs in order to understand how to handle RSpec-specific requests. The suite of Pry tools (pry, pry-remote and pry-nav) will help us debug any problems we come across and will be useful for any gems you develop in the future.

RSpec Rake Tasks

Now that we’ve updated the gemspec to include RSpec as a dependency, we’ll need to add an RSpec-related Rake task to our Rakefile, so that either we (manually) or Guard (automatically) can execute the task and run the RSpec test suite:

require 'rspec/core/rake_task'
require 'bundler/gem_tasks'

# Default directory to look in is `/specs`
# Run with `rake spec`
RSpec::Core::RakeTask.new(:spec) do |task|
  task.rspec_opts = ['--color', '--format', 'nested']
end

task :default => :spec

In the updated version of Rakefile above, we are loading an additional file that is packaged with RSpec (require 'rspec/core/rake_task'). This new file adds some RSpec-related modules and classes for us to use.

Once this code has loaded, we create a new instance of the RSpec::Core::RakeTask class (made available when we loaded rspec/core/rake_task) and pass it a code block to execute. The code block defines the options for our RSpec test suite.

Spec Files

Now that the majority of the RSpec test suite configuration is in place, the last thing we need to do is add a test file.

Let’s create a spec directory and, inside that, create sinderella_spec.rb:

require 'spec_helper'

describe Sinderella do
  it 'does stuff' do
    pending # no code yet
  end
end

You’ll see that we’ve included a temporary specification that states that the code “does stuff.” When the test suite is run, this test will not cause any errors, even though no code has been implemented yet, because we have marked it as “pending” (an RSpec-specific command). At this point, we’re only interested in getting a barebones set-up in place; we’ll flesh out the tests soon enough.

You may have noticed that we’re also loading another file, named spec_helper.rb. This type of file is typical in an RSpec suite and is used to load any dependencies or libraries that are required for the tests to run. The content of the spec helper file will look like this:

require 'pry'
require 'sinderella'

All we’ve done here is load Pry (in case we need it for debugging) and the main Sinderella gem code (because this is what we want to test).

Guard And tmux

At this point, we’ve gone over the set-up and preparation of RSpec and Rake (to get our testing framework in place). We also know what Guard is and how it helps us to test the code. Now, let’s go ahead and add a Guardfile to the root directory, with the following contents:

guard 'rspec' do
  # watch /lib/ files
  watch(%r{^lib/(.+)\.rb$}) do |m|
    "spec/#{m[1]}_spec.rb"
  end

  # watch /spec/ files
  watch(%r{^spec/(.+)\.rb$}) do |m|
    "spec/#{m[1]}.rb"
  end
end
This file tells Guard that we’re using RSpec to run our tests. It also defines which directories to watch for changes and what to do when it notices changes. In this case, we’re using regular expressions to match any files in the lib or spec directory and to execute the relevant RSpec command that runs our tests (or to run one specific test).
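
To see that mapping in action, here is the same regular expression applied by hand (the file names are illustrative):

```ruby
# The pattern Guard uses to map a changed lib file to its spec file:
m = %r{^lib/(.+)\.rb$}.match('lib/sinderella.rb')

# m[1] captures 'sinderella', so editing lib/sinderella.rb
# triggers a run of spec/sinderella_spec.rb.
spec_file = "spec/#{m[1]}_spec.rb"
# spec_file == "spec/sinderella_spec.rb"
```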

We’ll see in a minute how to actually run Guard. For now, let’s see how tmux fits into this workflow.

tmux

Some developers prefer to have separate applications open (for example, a code editor such as Sublime Text and a terminal application to run tests). I prefer to use tmux to have multiple terminal shells open on one screen and to have Vim open on another screen to edit code. Thus, I can edit code and get visual feedback from the terminal about the state of the tests all on one screen. You don’t need to follow the exact same approach. As mentioned, there are other ways to get feedback, but I have found tmux and Vim to be the most suitable.

So, we have two tmux panes open, one in which Vim is running, and the other in which a terminal runs the command bundle exec guard (this is how we actually run Guard).

That command will return something like the following back to the terminal:

❯ bundle exec guard
09:53:55 - INFO - Guard is using Tmux to send notifications.
09:53:55 - INFO - Guard is using TerminalTitle to send notifications.
09:53:55 - INFO - Guard::RSpec is running
09:53:55 - INFO - Guard is now watching at '/path/to/Sinderella' 

From: /path/to/Sinderella/sinderella.gemspec @ line 1 :

 => 1: # coding: utf-8
    2: lib = File.expand_path('../lib', __FILE__)
    3: $LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
    4: require 'sinderella/version'
    5:
    6: Gem::Specification.new do |spec|

From this point on, you can press the Return key to run all tests at once, which will display the following message in the terminal:

09:57:41 - INFO - Run all
09:57:41 - INFO - Running all specs

This will be followed by the number of passed and failed tests and any errors that have occurred.

Continuous Integration With Travis CI

As mentioned at the beginning, continuous integration (CI) is the process of merging code with a central repository in order to prevent integration problems down the road in a project’s life cycle.

We’ll use the free Travis CI service (which you should have signed up for by now).

Upon first viewing your “Accounts” page in Travis CI, you’ll be presented with a complete list of all of your public GitHub repositories, from which you can select ones for Travis CI to monitor. Then, any time you push a commit to GitHub, Travis CI will run your tests.

Once you have selected repositories, you’ll be redirected to a GitHub “hooks” page, where you can confirm and authorize the configuration.

The Travis CI page for the Sinderella gem is where you can view the entire build history, including both passed and failed tests.


To complete the configuration, we need to add a .travis.yml file. If you’ve enabled your repository from your Travis CI account and you don’t have a .travis.yml file, then Travis CI will throw an error and complain that you need one. Let’s look at the one we’ve set up for Sinderella:

language: ruby
cache: bundler

rvm:
  - jruby
  - 2.0.0

script: 'bundle exec rake'

notifications:
  email:
    on_failure: change
    on_success: never

Let’s go through each property to understand what it does:

  • language: ruby
    Here, we’re telling Travis CI that the language in which we’re writing tests is Ruby.
  • cache: bundler
    This tells Travis CI that we want it to cache the gems we’ve specified. (Running Bundler can be a slow process, and if your gems are unlikely to change often, then you don’t want to keep running bundle install every time you push a commit, because we want our tests to run as quickly as possible.)
  • rvm:
    This specifies the different Ruby versions and engines that we want our tests to run against (in this case, JRuby and MRI 2.0.0).
  • script: 'bundle exec rake'
    This gives Travis CI the command it requires to run the tests.
  • notifications:
    This indicates how we want Travis CI to notify us. Here, we’re specifying an email address to receive the notifications. We’re also specifying that an email should be sent only if a failure has occurred (there’s no point in getting thousands of emails telling us that nothing is wrong).

Preventing a Test Run

If you’re committing a change that doesn’t affect your code or tests, then you don’t want to waste time watching those non-breaking changes trigger a test run on Travis CI (no matter how fast the tests are).

The easiest way to avoid this is to add [ci skip] anywhere in your commit message. Travis CI will see this and then happily ignore the commit.

Code Coverage And Statistics With Coveralls

One last service we’ll use is Coveralls, which you should have already registered for.

Coveralls works with your continuous integration server to give you test coverage history and statistics. Free for open source, pro accounts for private repos.

When you log into Coveralls for the first time, it will ask you to select repositories to monitor. It works like Travis CI, listing all of your repositories for you to enable and disable access. (You can also click a button to resynchronize the repository list, in case you’ve added a repository since last syncing).

To set up Coveralls, we need to add a file that tells Coveralls what to do. For our project, we’ll add a file named .coveralls.yml to the root directory, containing a single line of configuration:

service_name: travis-ci

This tells Coveralls that we’re using Travis CI as our CI server. (If you’ve signed up for a Pro account, then use travis-pro instead.)

We also need to add the Coveralls gem to our gemspec:

spec.add_development_dependency "coveralls"

Finally, we need to include Coveralls’ code in our spec_helper.rb file:

require 'coveralls'
Coveralls.wear!

require 'pry'
require 'sinderella'

Notice that we have to load the Coveralls code before the Sinderella code. If you load Coveralls after the application’s code, it won’t be able to hook into the application properly.

Let’s return to our TDD process.

Skeleton Specification

When following TDD, I prefer to create a skeleton of a test suite, so that I have some idea of the type of API to develop. Let’s change the contents of the sinderella_spec.rb file to have a few empty tests:

require 'spec_helper'

describe Sinderella do
  let(:data) {{ :key => 'value' }}
  let(:till_midnight) { 0 }

  describe '.transforms(data, till_midnight)' do
    it 'returns a hash of the passed data' do
      pending
    end

    it 'stores original and transformed data' do
      pending
    end

    it 'restores the data to its original state after set time' do
      pending
    end
  end

  describe '.get(id)' do
    context 'before midnight (before time expired)' do
      it 'returns transformed data' do
        pending
      end
    end

    context 'past midnight (after time expired)' do
      it 'returns original data' do
        pending
      end
    end
  end

  describe '.midnight(id)' do
    it 'restores the data to its original state' do
      pending
    end
  end
end

Notice the pending command, which is provided by RSpec and allows the tests to run without throwing an error. (The suite will highlight pending tests that still need to be implemented so that you don’t forget about them.)

You could also use the fail command, but pending is recommended for unimplemented tests, particularly before you’ve written the code to execute them. Relish demonstrates some examples.

From here on, I follow the full TDD process and write the code from the outside in: red, green, refactor.

For the first test I wrote for Sinderella, I realized that my code needs a way to create an MD5 hash from a data object, and that’s when I reached for the BBC News’ gem, Crimp. Thus, I had to update the gemspec file to include a new runtime dependency: spec.add_runtime_dependency "crimp".

I won’t go step by step into how I TDD’ed the code because it isn’t relevant to this article. We’re focusing more on the principles of creating a gem, not on details of implementation. But you can get all of the gruesome details from the public list of commits in Sinderella’s GitHub repository.

Also, you might not even be interested in the RSpec testing framework and might be planning on using a different framework to write your gem. That’s fine. Anyway, what follows is the full Sinderella specification file (as of February 2014):

First, spec/sinderella_spec.rb:
require 'spec_helper'

describe Sinderella do
  let(:data) {{ :key => 'value' }}
  let(:till_midnight) { 0 }

  def create_new_instance
    @id = subject.transforms(data, till_midnight) do |data|
      data.each do |key, value|
        data.tap { |d| d[key].upcase! }
      end
    end
  end

  before(:each) do
    create_new_instance
  end

  describe '.transforms(data, till_midnight)' do
    it 'returns a MD5 hash of the provided data' do
      expect(@id).to be_a String
      expect(@id).to eq '24e73d3a4f027ff81ed4f32c8a9b8713'
    end
  end

  describe '.get(id)' do
    context 'before midnight (before time expired)' do
      it 'returns the transformed data' do
        expect(subject.get(@id)).to eq({ :key => 'VALUE' })
      end
    end

    context 'past midnight (after time expired)' do
      it 'returns the original data' do
        Sinderella.reset_data_at @id
        expect(subject.get(@id)).to eq({ :key => 'value' })
      end
    end
  end

  describe '.midnight(id)' do
    context 'before midnight (before time expired)' do
      it 'restores the data to its original state' do
        subject.midnight(@id)
        expect(subject.get(@id)).to eq({ :key => 'value' })
      end
    end
  end
end

And spec/data_store_spec.rb:
require 'spec_helper'

describe DataStore do
  let(:instance)    { DataStore.instance }
  let(:original)    { 'bar' }
  let(:transformed) { 'BAR' }

  before(:each) do
    instance.set({
      :id => 'foo',
      :original => original,
      :transformed => transformed
    })
  end

  describe 'set(data)' do
    it 'stores original and transformed data' do
      expect(instance.get('foo')[:original]).to eq(original)
      expect(instance.get('foo')[:transformed]).to eq(transformed)
    end
  end

  describe 'get(id)' do
    it 'returns data object' do
      expect(instance.get('foo')).to be_a Hash
      expect(instance.get('foo').key?(:original)).to be true
      expect(instance.get('foo').key?(:transformed)).to be true
    end
  end

  describe 'reset(id)' do
    it 'replaces the transformed data with original data' do
      instance.reset('foo')
      foo = instance.get('foo')
      expect(foo[:original]).to eq(foo[:transformed])
    end
  end
end

Passing Specification

Here is the output of our passed test suite:

❯ rake spec
/path/to/.rubies/jruby-1.7.9/bin/jruby -S rspec ./spec/data_store_spec.rb ./spec/sinderella_spec.rb --color --format nested

DataStore
  set(data)
    stores original and transformed data
  get(id)
    returns data object
  reset(id)
    replaces the transformed data with original data

Sinderella
  .transforms(data, till_midnight)
    returns a MD5 hash of the provided data
  .get(id)
    before midnight (before time expired)
      returns the transformed data
    past midnight (after time expired)
      returns the original data
  .midnight(id)
    before midnight (before time expired)
      restores the data to its original state

Finished in 0.053 seconds
7 examples, 0 failures

Design Patterns

According to Wikipedia:

A design pattern in architecture and computer science is a formal way of documenting a solution to a design problem in a particular field of expertise.

Many design patterns exist; one in particular, the Singleton pattern, is usually frowned upon.

I won’t debate the merits or problems of the Singleton design pattern, but I opted to use it in Sinderella to implement the DataStore class (which is the object that stores the original and transformed data), because what would be the point of having multiple instances of DataStore if the data is expected to be shared from a single access point?

Luckily, Ruby makes it really easy to create a Singleton. Just add include Singleton in your class definition.

Once you’ve done that, you’ll be able to access the single instance of your class only via its instance method, for example, MyClass.instance.some_method().
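
Here is the behavior in isolation, using a hypothetical Settings class rather than anything from Sinderella:

```ruby
require 'singleton'

class Settings
  include Singleton
  attr_accessor :mode
end

# `include Singleton` makes .new private, so the only way in is
# .instance, which always returns the very same object:
a = Settings.instance
b = Settings.instance
a.mode = :dark
# a and b are the same object, so b.mode is :dark as well.
# Calling Settings.new raises NoMethodError (private method `new`).
```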

We saw the specification (or test file) for DataStore in the previous section. Below is the full implementation of DataStore:

require 'singleton'

class DataStore
  include Singleton

  def set(data)
    hash_data = {
      :original    => data[:original],
      :transformed => data[:transformed]
    }
    container.store(data[:id], hash_data)
  end

  def get(id)
    container.fetch(id)
  end

  def reset(id)
    original  = container.fetch(id)[:original]
    hash_data = {
      :original => original,
      :transformed => original
    }
    container.store(id, hash_data)
  end

  private

  def container
    @store ||= Hash.new
  end
end

Badges

You might have seen some nice green badges in your favorite GitHub repository, indicating whether the tests associated with the code passed or not. Adding these to the README is straightforward enough:

[![Build Status](]( 

[![Gem Version](](

[![Coverage Status](](

The first badge is provided by Travis CI, which you can read more about in the documentation.

The second is provided by RubyGems. You’ll notice on your gem’s page a “badge” link, which provides the required code and format (in this case, in Markdown format).

The third is provided by Coveralls. When you visit your repository page in the Coveralls application, you’ll see a link to “Get badge URLS”; from there, you can select the relevant format.

REPL-Driven Development

Tests and TDD are a critical part of the development process, but they won’t eliminate all bugs by themselves. This is where a tool such as Pry can help you figure out how a piece of code works and which path the code takes through its conditional branches.

To use Pry, enter the pry command in the terminal. As long as Pry is installed and available from that directory, you’ll be dropped into a Pry session. To view all available commands, run the help command.

Testing a Local Gem Build

If you want to run the gem outside of the test suite, then you’ll want to use Pry. To do this, we’ll need to build the gem locally and then install that local build.

To build the gem, run the following command from your gem’s root directory: gem build sinderella.gemspec. This will generate a physical .gem file.

Once the gem is built and a .gem file has been created, you can install it from the local file with the following command: gem install ./sinderella-0.0.1.gem.

Notice that the built gem file includes the version number, so that you know you’re installing the right one (in case you’ve built multiple versions of the gem).

After installing the local version of the gem, you can open a Pry session and load the gem with require 'sinderella' and continue to execute your own Ruby code within Pry to test the gem as needed.

Releasing Your Gem

Once our gem has passed all of our tests and we’ve built and run it locally, we can look to release the gem to the Ruby community by pushing it to the RubyGems server.

To release our gem, we’ll use the Rake commands provided by Bundler. To view what commands are available, run rake --tasks. You’ll see something similar to the following output:

rake build    # Build sinderella-0.0.1.gem into the pkg directory
rake install  # Build and install sinderella-0.0.1.gem into system gems
rake release  # Create tag v0.0.1 and build and push sinderella-0.0.1.gem t...
rake spec     # Run RSpec code examples
  • rake build
    This first task does something similar to gem build sinderella.gemspec but places the gem in a pkg (package) directory.
  • rake install
    The second task does the same as gem install ./sinderella-0.0.1.gem but saves us the extra typing.
  • rake release
    The third task is what we’re most interested at this point. It creates a tag in git, indicating the relevant version number, pulled from the version.rb file that Bundler created for us. It then builds the gem and pushes it to RubyGems.
  • rake spec
    The fourth task runs the tests using the test runner (in this case, RSpec), as defined and configured in the main Rakefile.

To release our gem, we’ll first need to make sure that the version number in the version.rb file is correct. If it is, then we’ll commit those changes and run the rake release task, which should give the following output:

❯ rake release
sinderella 0.0.1 built to pkg/sinderella-0.0.1.gem.
Tagged v0.0.1.
Pushed git commits and tags.
Pushed sinderella 0.0.1 to rubygems.org.

Now we can view the details of the gem on rubygems.org, and other users may pull our gem into their own code simply by installing it and adding require 'sinderella'.

Conclusion

Thanks to the use of Bundler, the process of creating a gem boilerplate is made a lot simpler. And thanks to the principles of TDD and REPL-driven development, we know that we have a well-tested piece of code that can be reliably shared with the Ruby community.

(al, il)

© Mark McDonnell for Smashing Magazine, 2014.