
May 24 2018

11:30

Lessons Learned While Developing WordPress Plugins

Jakub Mikita

Every WordPress plugin developer struggles with tough problems and code that’s difficult to maintain. We spend late nights supporting our users and tear out our hair when an upgrade breaks our plugin. Let me show you how to make it easier.

In this article, I’ll share my five years of experience developing WordPress plugins. The first plugin I wrote was a simple marketing plugin. It displayed a call to action (CTA) button with Google’s search phrase. Since then, I’ve written another 11 free plugins, and I maintain almost all of them. I’ve written around 40 plugins for my clients, from really small ones to ones that have been maintained for over a year now.


Good development and support lead to more downloads. More downloads mean more money and a better reputation. This article will show you the lessons I’ve learned and the mistakes I’ve made, so that you can improve your plugin development.


1. Solve A Problem

If your plugin doesn’t solve a problem, it won’t get downloaded. It’s as simple as that.

Take the Advanced Cron Manager plugin (8,000+ active installations). It helps WordPress users who are having a hard time debugging their cron. The plugin was written out of a need — I needed something to help myself. I didn’t need to market this one, because people already needed it. It scratched their itch.

On the other hand, there’s the Bug — fly on the screen plugin (70+ active installations). It randomly simulates a fly on the screen. It doesn’t really solve a problem, so it’s not going to have a huge audience. It was a fun plugin to develop, though.

Focus on a problem. When people don’t see their SEO performing well, they install an SEO plugin. When people want to speed up their website, they install a caching plugin. When people can’t find a solution to their problem, then they find a developer who writes a solution for them.

As David Hehenberger attests in his article about writing a successful plugin, need is a key factor in the WordPress user’s decision of whether to install a particular plugin.

If you have an opportunity to solve someone’s problem, take a chance.

2. Support Your Product

“3 out of 5 Americans would try a new brand or company for a better service experience. 7 out of 10 said they were willing to spend more with companies they believe provide excellent service.”

— Nykki Yeager

Don’t neglect your support. Don’t treat it like a must, but more like an opportunity.

Good-quality support is critical in order for your plugin to grow. Even a plugin with the best code will get some support tickets. The more people who use your plugin, the more tickets you’ll get. A better user experience will get you fewer tickets, but you will never reach inbox 0.

Every time someone posts a message in a support forum, I get an email notification immediately, and I respond as soon as I can. It pays off. The vast majority of my good reviews were earned because of the support. This is a side effect: Good support often translates to 5-star reviews.

When you provide excellent support, people start to trust you and your product. And a plugin is a product, even if it’s completely free and open-source.

Good support is about more than writing a short answer once a day. When your plugin gains traction, you’ll get several tickets per day. It’s a lot easier to manage if you’re proactive and answer customers’ questions before they even ask.

Here’s a list of some actions you can take:

  • Create an FAQ section in your repository.
  • Pin the “Before you ask” thread at the top of your support forum, highlighting the troubleshooting tips and FAQ.
  • Make sure your plugin is simple to use and that users know what they should do after they install it. UX is important.
  • Analyze the support questions and fix the pain points. Set up a board where people can vote for the features they want.
  • Create a video showing how the plugin works, and add it to your plugin’s main page in the WordPress.org repository.

It doesn’t really matter what software you use to support your product. WordPress.org’s official support forum works just as well as email or your own support system. I use WordPress.org’s forum for the free plugins and my own system for the premium plugins.

3. Don’t Use Composer

Composer is package-manager software. A repository of packages is hosted on packagist.org, and you can easily download them to your project. It’s like NPM or Bower for PHP. Managing your third-party packages the way Composer does is a good practice, but don’t use it in your WordPress project.

I know, I dropped a bomb. Let me explain.

Composer is great software. I use it myself, but not in public WordPress projects. The problem lies in conflicts. WordPress doesn’t have a global package manager, so each and every plugin has to load dependencies of its own. When two plugins each load their own copy of the same dependency, redeclaring its classes causes a fatal error.

There isn’t really an ideal solution to this problem, but Composer makes it worse. You can bundle the dependency in your source manually and always check whether you are safe to load it.

Composer’s issue with WordPress plugins is still not solved, and there won’t be any viable solution to this problem in the near future. The problem was raised many years ago, and, as you can read in WP Tavern’s article, many developers are trying to solve it, without any luck.

The best you can do is to make sure the environment is safe before you load your code.
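
For illustration, here is a minimal sketch of that kind of guard around a manually bundled dependency; the Acme\Logger class and the file path are made up for the example:

<?php
// Load our bundled copy of a dependency only if no other plugin
// has already loaded it. The class name and path are hypothetical.
if ( ! class_exists( 'Acme\Logger' ) ) {
  require_once plugin_dir_path( __FILE__ ) . 'includes/vendor/acme/logger.php';
}

A guard like this only prevents “cannot redeclare class” fatals; if another plugin has already loaded a different version of the same class, you still have to code defensively around it.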

4. Reasonably Support Old PHP Versions

Don’t support very old versions of PHP, like 5.2. The security issues and maintenance aren’t worth it, and you’re not going to earn more installations from those older versions.

The Notification plugin’s usage across PHP versions, May 2018.

Go with PHP 5.6 as a minimum requirement, even though its official support will be dropped by the end of 2018. WordPress itself recommends PHP 7.2.

There’s a movement that discourages support of legacy PHP versions. The Yoast team released the Whip library, which you can include in your plugin and which displays to your users important information about their PHP version and why they should upgrade.

Tell your users which versions you do support, and make sure their website doesn’t break after your plugin is installed on too low a version.
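
As a rough sketch of such a check (the function name, version threshold, and notice text below are only placeholders), the main plugin file could bail out early like this:

<?php
// A hypothetical early exit for unsupported PHP versions.
function myplugin_php_version_notice() {
  echo '<div class="notice notice-error"><p>';
  echo 'My Plugin requires PHP 5.6 or newer. Please ask your host about upgrading.';
  echo '</p></div>';
}

if ( version_compare( PHP_VERSION, '5.6', '<' ) ) {
  add_action( 'admin_notices', 'myplugin_php_version_notice' );
  return; // Stop loading the rest of the plugin.
}

Keep any newer syntax out of this bootstrap file itself, so that old PHP versions can at least parse it and show the notice instead of a white screen.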

5. Focus On Quality Code

Writing good code is tough in the beginning. It takes time to learn the “SOLID” principles and design patterns and to change old coding habits.

It once took me three days to display a simple string in WordPress, when I decided to rewrite one of my plugins using better coding practices. It was frustrating knowing that it should have taken 30 minutes. Switching my mindset was painful but worth it.

Why was it so hard? Because you start writing code that seems at first to be overkill and not very intuitive. I kept asking myself, “Is this really needed?” For example, you have to separate the logic into different classes and make sure each is responsible for a single thing. You also have to separate classes for the translation, custom post type registration, assets management, form handlers, etc. Then, you compose the bigger structures out of the simple small objects. That’s called dependency injection. That’s very different from having “front end” and “admin” classes, where you cram all your code.
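
To make that more concrete, here is a minimal sketch of that kind of composition; all of the class names are invented for the example:

<?php
// Small, single-responsibility classes (hypothetical examples).
class Post_Type_Registrar {
  public function register() { /* register_post_type() calls live here */ }
}

class Assets_Manager {
  public function enqueue() { /* wp_enqueue_script() calls live here */ }
}

class Plugin {
  private $post_types;
  private $assets;

  // Dependencies are injected from the outside instead of being created here.
  public function __construct( Post_Type_Registrar $post_types, Assets_Manager $assets ) {
    $this->post_types = $post_types;
    $this->assets     = $assets;
  }
}

// Compose the bigger structure out of the simple small objects.
$plugin = new Plugin( new Post_Type_Registrar(), new Assets_Manager() );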

The other counterintuitive practice was to keep all actions and filters outside of the constructor method. This way, you’re not invoking any actions while creating the objects, which is very helpful for unit testing. You also have better control over which methods are executed and when. I wish I had known this before I wrote a project with an infinite loop caused by actions in the constructor methods. Those kinds of bugs are hard to trace and hard to fix. The project had to be refactored.
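
Here is a minimal sketch of that pattern, again with made-up names: the constructor only prepares state, and hooks are attached in a separate method.

<?php
class Settings_Page {

  private $option_name;

  // Nothing fires in the constructor, so the class can be created safely in unit tests.
  public function __construct() {
    $this->option_name = 'myplugin_settings';
  }

  // Actions are registered explicitly, giving you control over when they run.
  public function register_hooks() {
    add_action( 'admin_menu', array( $this, 'add_menu_page' ) );
    add_action( 'admin_init', array( $this, 'register_settings' ) );
  }

  public function add_menu_page() { /* ... */ }
  public function register_settings() { /* ... */ }
}

$settings = new Settings_Page();
$settings->register_hooks();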

These are just a few examples, but you should get to know the SOLID principles. They are valid for any system and any coding language.

When you follow all of the best practices, you reach the point where every new feature just fits in. You don’t have to tweak anything or make any exceptions to the existing code. It’s amazing. Instead of getting more complex, your code just gets more advanced, without losing flexibility.

Also, format your code properly, and make sure every member of your team follows a standard. Standards will make your code predictable and easier to read and test. WordPress has its own standards, which you can implement in your projects.

6. Test Your Plugin Ahead Of Time

I learned this lesson the hard way. Lack of testing led me to release a new version of a plugin with a fatal error. Twice. Both times, I got a 1-star rating, which I couldn’t turn into a positive review.

You can test manually or automatically. Travis CI is a continuous integration service that integrates with GitHub. I’ve built a really simple test suite for my Notification plugin that just checks whether the plugin can boot properly on every PHP version. This way, I can be sure the plugin is error-free, and I don’t have to pay much attention to testing it in every environment.

Each automated test takes a fraction of a second. 100 automated tests will take about 10 minutes to complete, whereas manual testing needs about 2 minutes for each case.

The more time you invest in testing your plugin up front, the more it will save you in the long run.

To get started with automated testing, you can use the WP-CLI `wp scaffold plugin-tests` command, which installs all of the configuration you need.
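
Once the scaffolding is in place, a boot check can be as small as a single test case. This is only a sketch; the My_Plugin class stands in for whatever your main plugin class is called:

<?php
// tests/test-boot.php — assumes the WordPress test suite set up by the scaffold command.
class Boot_Test extends WP_UnitTestCase {

  public function test_plugin_boots() {
    // The test bootstrap loads the plugin; a fatal error during boot
    // would have aborted the suite before this assertion runs.
    $this->assertTrue( class_exists( 'My_Plugin' ) );
  }
}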

7. Document Your Work

It’s a cliche that developers don’t like to write documentation. It’s the most boring part of the development process, but a little goes a long way.

Write self-documenting code. Pay attention to variable, function and class names. Don’t make any complicated structures, like cascades that can’t be read easily.

Another way to document code is to use the “doc block”, which is a comment for every file, function and class. If you write how the function works and what it does, it will be so much easier to understand when you need to debug it six months from now. WordPress Coding Standards covers this part by forcing you to write the doc blocks.
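
For illustration, a doc block in the WordPress style might look like this (the function itself is a made-up example):

<?php
/**
 * Notify the post author that their post has been published.
 *
 * @since 1.2.0
 *
 * @param int     $post_id ID of the post that was just published.
 * @param WP_Post $post    The full post object.
 * @return bool True if the notification was sent, false otherwise.
 */
function myplugin_notify_author( $post_id, $post ) {
  // ...
  return true;
}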

Both techniques will save you time when writing documentation, but code documentation won’t be read by everyone.

For the end user, you have to write high-quality, short and easy-to-read articles explaining how the system works and how to use it. Videos are even better; many people prefer to watch a short tutorial than read an article. They are not going to look at the code, so make their lives easier. Good documentation also reduces support tickets.

Conclusion

These seven rules have helped me develop good-quality products, which are starting to be a core business at BracketSpace. I hope they’ll help you in your journey with WordPress plugins as well.

Let me know in the comments what your golden development rule is or whether you’ve found any of the above particularly helpful.


May 23 2018

10:00

Creating The Feature Queries Manager DevTools Extension

Ire Aderinokun

Within the past couple of years, several game-changing CSS features have been rolled out to the major browsers. CSS Grid Layout, for example, went from 0 to 80% global support within the span of a few months, making it an incredibly useful and reliable tool in our arsenal. Even though current support for a feature like CSS Grid Layout is relatively good, not every browser still in use supports it. This means it’s very likely that you and I will be developing for a browser in which it isn’t supported.

The modern solution to developing for both modern and legacy browsers is feature queries. They allow us to write CSS that is conditional on browser support for a particular feature. Although working with feature queries is almost magical, testing them can be a pain. Unlike media queries, we can’t easily simulate the different states by just resizing the browser. That’s where the Feature Queries Manager comes in, an extension to DevTools to help you easily toggle your feature query conditions. In this article, I will cover how I built this extension, as well as give an introduction to how developer tools extensions are built.

Working With Unsupported CSS

If a property-value pair (e.g. display: grid) is not supported by the browser the page is viewed in, not much happens. Unlike other programming languages, if something is broken or unsupported in CSS, it only affects the broken or unsupported rule, leaving everything else around it intact.


Let’s take, for example, this simple layout:

The layout in a supporting browser

We have a header spanning across the top of the page, a main section directly below that to the left, a sidebar to the right, and a footer spanning across the bottom of the page.

Here’s how we could create this layout using CSS Grid:

See the Pen layout-grid by Ire Aderinokun (@ire) on CodePen.

In a supporting browser like Chrome, this works just as we want. But if we were to view this same page in a browser that doesn’t support CSS Grid Layout, this is what we would get:

The layout in an unsupporting browser

It is essentially the same as if we had not applied any of the grid-related styles in the first place. This behavior of CSS was always intentional. In the CSS specification, it says:

In some cases, user agents must ignore part of an illegal style sheet, [which means to act] as if it had not been there

Historically, the best way to handle this has been to make use of the cascading nature of CSS. According to the specification, “the last declaration in document order wins.” This means that if the same property is defined multiple times within a single declaration block, the last one prevails.

For example, if we have the following declarations:

body {
  display: flex;
  display: grid;
}

Assuming both Flexbox and Grid are supported in the browser, the latter — display: grid — will prevail. But if Grid is not supported by the browser, then that rule is ignored, and any previous valid and supported rules, in this case display: flex, are used instead.


Cascading Problems

Using the cascade as a method for progressive enhancement is and has always been incredibly useful. Even today, there is no simpler or better way to handle simple one-liner fallbacks, such as this one for applying a solid colour where the rgba() syntax is not supported.

div {
    background-color: rgb(0,0,0);
    background-color: rgba(0,0,0,0.5);
}

Using the cascade, however, has one major limitation, which comes into play when we have multiple, dependent CSS rules. Let’s again take the layout example. If we were to attempt to use this cascade technique to create a fallback, we would end up with competing CSS rules.

See the Pen layout-both by Ire Aderinokun (@ire) on CodePen.

In the fallback solution, we need to use certain properties, such as margins and widths, that aren’t needed in the “enhanced” Grid version and in fact interfere with it. This makes it difficult to rely on the cascade for more complex progressive enhancement.

Feature Queries To The Rescue!

Feature queries solve the problem of needing to apply groups of styles that are dependent on the support of a CSS feature. Feature queries are a “nested at-rule” which, like the media queries we are used to, allow us to create a subset of CSS declarations that are applied based on a condition. Unlike media queries, whose condition depends on device and screen specs, feature query conditions are based on whether the browser supports a given property-value pair.

A feature query is made up of three parts:

  1. The @supports keyword
  2. The condition, e.g. display: flex
  3. The nested CSS declarations.

Here is how it looks:

@supports (display: grid) {
    body { display: grid; }
}

If the browser supports display: grid, then the nested styles will apply. If the browser does not support display: grid, then the block is skipped over entirely.

The above is an example of a positive condition within a feature query, but there are four flavors of feature queries:

  1. Positive condition, e.g. @supports (display: grid)

  2. Negative condition, e.g. @supports not (display: grid)

  3. Conjunction, e.g. @supports (display:flex) and (display: grid)

  4. Disjunction, e.g. @supports (display:-ms-grid) or (display: grid)

Feature queries solve the problem of having separate fallback and enhancement groups of styles. Let’s see how we can apply this to our example layout:

See the Pen Run bunny run by Ire Aderinokun (@ire) on CodePen.

Introducing The Feature Queries Manager

When we write media queries, we test them by resizing our browser so that the styles at each breakpoint apply. So how do we test feature queries?

Since feature queries are dependent on whether a browser supports a feature, there is no easy way to simulate the alternative state. Currently, the only way to do this would be to edit your code to invalidate/reverse the feature query.

For example, if we wanted to simulate a state in which CSS Grid is not supported, we would have to do something like this:

/* fallback styles here */

@supports (display: grrrrrrrrid) {
    /* enhancement styles here */
}

This is where the Feature Queries Manager comes in. It is a way to reverse your feature queries without ever having to manually edit your code.


It works by simply negating the feature query as it is written. So the following feature query:

@supports (display: grid) {
    body { display: grid; }
}

Will become the following:

@supports not (display: grid) {
    body { display: grid; }
}

Fun fact: this method works for negative feature queries as well. For example, the following negative feature query:

@supports not (display: grid) {
    body { display: block; }
}

Will become the following:

@supports not (not (display: grid)) {
    body { display: block; }
}

Which is essentially the same as removing the “not” from the feature query.

@supports (display: grid) {
    body { display: block; }
}

Building The Feature Queries Manager

FQM is an extension to your browser’s Developer Tools. It works by registering all the CSS on a page, filtering out the CSS that is nested within a feature query, and giving us the ability to toggle the normal or “inverted” version of that feature query.

Creating A DevTools Panel

Before I go on to how I specifically built the FQM, let’s cover how to create a new DevTools panel in the first place. Like any other browser extension, we register a DevTools extension with the manifest file.

{
  "manifest_version": 2,
  "name": "Feature Queries Manager",
  "short_name": "FQM",
  "description": "Manage and toggle CSS on a page behind a @supports Feature Query.",
  "version": "0.1",
  "permissions": [
    "tabs",
    "activeTab",
    "<all_urls>"
  ],
  "icons": {
    "128": "images/icon@128.png",
    "64": "images/icon@64.png",
    "16": "images/icon@16.png",
    "48": "images/icon@48.png"
  }
}

To create a new panel in DevTools, we need two files — a devtools_page, which is an HTML page with an attached script that registers the second file, panel.html, which controls the actual panel in DevTools.

The devtools script creates the panel page

First, we add the devtools_page to our manifest file:

{
  "manifest_version": 2,
  "name": "Feature Queries Manager",
  ...
  "devtools_page": "devtools.html",
}

Then, in our devtools.html file, we create a new panel in DevTools:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8"></head>
<body>
<!-- Note: I’m using the browser-polyfill to be able to use the Promise-based WebExtension API in Chrome -->
<script src="../browser-polyfill.js"></script>

<!-- Create FQM panel -->
<script>
browser.devtools.panels.create("FQM", "images/icon@64.png", "panel.html");
</script>
</body>
</html>

Finally, we create our panel HTML page:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8"></head>
<body>
  <h1>Hello, world!</h1>
</body>
</html>

If we open up our browser, we will see a new panel called “FQM” which loads the panel.html page.

A new panel in browser DevTools showing the “Hello, World” text


Reading CSS From The Inspected Page

In the FQM, we need to access all the CSS referenced in the inspected document in order to know which are within feature queries. However, our DevTools panel doesn’t have direct access to anything on the page. If we want access to the inspected document, we need a content script.

The content script reads CSS from the HTML document

A content script is a JavaScript file that has the same access to the HTML page as any other piece of JavaScript embedded within it. To register a content script, we just add it to our manifest file:

{
  "manifest_version": 2,
  "name": "Feature Queries Manager",
  ...
  "content_scripts": [{
    "matches": ["<all_urls>"],
    "js": ["browser-polyfill.js", "content.js"]
  }],
}

In our content script, we can then read all the stylesheets and the CSS within them by accessing document.styleSheets:

Array.from(document.styleSheets).forEach((stylesheet) => {
  let cssRules;

  try {
    cssRules = Array.from(stylesheet.cssRules);
  } catch(err) {
    return console.warn(`[FQM] Can't read cssRules from stylesheet: ${ stylesheet.href }`);
  }

  cssRules.forEach((rule, i) => {

    /* Check if css rule is a Feature Query */
    if (rule instanceof CSSSupportsRule) {
      /* do something with the css rule */
    }

  });
});

Connecting The Panel And The Content Scripts

Once we have the rules from the content script, we want to send them over to the panel so they can be visible there. Ideally, we would want something like this:

The content script passes information to the panel and the panel sends instructions to modify CSS back to the content

However, we can’t exactly do this, because the panel and content files can’t actually talk directly to each other. To pass information between these two files, we need a middleman — a background script. The resulting connection looks something like this:

The content and panel scripts communicate via a background script

As always, to register a background script, we need to add it to our manifest file:

{
  "manifest_version": 2,
  "name": "Feature Queries Manager",
  ...
  "background": {
    "scripts": ["browser-polyfill.js", "background.js"]
  },
}

The background file opens up a connection to the panel script and listens for messages coming from there. When the background file receives a message from the panel, it passes it on to the content script, which is listening for messages from the background. The background script waits for a response from the content script and relays that message back to the panel.

Here’s a basic example of how that works:
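
// panel.js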

// Open up a connection to the background script
const portToBackgroundScript = browser.runtime.connect();

// Send message to content (via background)
portToBackgroundScript.postMessage("Hello from panel!");

// Listen for messages from content (via background)
portToBackgroundScript.onMessage.addListener((msg) => {
  console.log(msg);
  // => "Hello from content!"
});
// background.js

// Open up a connection to the panel script
browser.runtime.onConnect.addListener((port) => {
  
  // Listen for messages from panel
  port.onMessage.addListener((request) => {
  
    // Send message from panel.js -> content.js
    // and return response from content.js -> panel.js
    browser.tabs.sendMessage(request.tabId, request)
      .then((res) => port.postMessage(res));
  });
});
// content.js

// Listen for messages from background
browser.runtime.onMessage.addListener((msg) => {

  console.log(msg)
  // => "Hello from panel!"
  
  // Send message to panel
  return Promise.resolve("Hello from content!");
});

Managing Feature Queries

Lastly, we can get to the core of what the extension does, which is to toggle the CSS related to a feature query on and off.

If you recall, in the content script, we looped through all the CSS within feature queries. When we do this, we also need to save certain information about the CSS rule:

  1. The rule itself
  2. The stylesheet it belongs to
  3. The index of the rule within the stylesheet
  4. An “inverted” version of the rule.

This is what that looks like:

cssRules.forEach((rule, i) => {
  
  const cssRule = rule.cssText.substring(rule.cssText.indexOf("{"));
  const invertedCSSText = `@supports not ( ${ rule.conditionText } ) ${ cssRule }`;
  
  FEATURE_QUERY_DECLARATIONS.push({ 
    rule: rule,
    stylesheet: stylesheet,
    index: i, 
    invertedCSSText: invertedCSSText
  });
  
});

When the content script receives a message from the panel to invert all declarations relating to the feature query condition, we can easily replace the current rule with the inverted one (or vice versa).

function toggleCondition(condition, toggleOn) {
  FEATURE_QUERY_DECLARATIONS.forEach((declaration) => {
    if (declaration.rule.conditionText === condition) {
      
      // Remove current rule
      declaration.stylesheet.deleteRule(declaration.index);
      
      // Replace at index with either original or inverted declaration
      const rule = toggleOn ? declaration.rule.cssText : declaration.invertedCSSText;
      declaration.stylesheet.insertRule(rule, declaration.index);
    }    
  });
}

And that is essentially it! The Feature Queries Manager extension is currently available for Chrome and Firefox.

Limitations Of The FQM

The Feature Queries Manager works by “inverting” your feature queries, so that the opposite condition applies. This means that it cannot be used in every scenario.

Fallbacks

If your “enhancement” CSS is not written within a feature query, then the extension cannot be used, as it depends on finding an @supports rule.

Unsupported Features

You need to take note of whether the browser you are using the FQM in supports the feature in question. This is particularly important if your original feature query is a negative condition, as inverting it will turn it into a positive condition. For example, if you wrote the following CSS:

div { background-color: blue; }

@supports not (display: grid) {
  div { background-color: pink; }
}

If you use the FQM to invert this condition, it will become the following:

div { background-color: blue; }

@supports (display: grid) {
  div { background-color: pink; }
}

For you to be able to actually see the difference, you would need to be using a browser which does in fact support display: grid.

I built the Feature Queries Manager as a way to more easily test different CSS as I develop, but it isn’t a replacement for testing layout in actual browsers and devices. Developer tools only go so far; nothing beats real device testing.

09:42
Trying to Keep Up with All the Latest Trends? BeTheme Does

May 22 2018

09:45

How To Reduce The Need To Hand-Code Theme Parts In Your WordPress Website

Nick Babich

(This is a sponsored article.) Good design leads to sales and conversions on your website, but crafting great design is no easy task. It takes a lot of time and effort to achieve excellent results.

Design is a constantly evolving discipline. Product teams iterate on design to deliver the best possible experience to their users. A lot of things might change during each iteration. Designers will introduce changes, and developers will dive into the code to adjust the design. While jumping into code to solve an exciting problem might be fun, doing it to resolve a minor issue is the exact opposite. It’s dull. Imagine that you, as a web developer, continually get requests from the design team like:

  • Change the featured image.
  • Update the copy next to the logo in the header.
  • Add a custom header to the “About Us” page.

These requests happen all the time in big projects. It’s a never-ending stream of boring requests. Want to have fun while creating websites, focus on more challenging tasks, and complete your projects much faster?

Elementor helps with just that. It reduces the need to hand-code the theme parts of your website and frees you up to work on more interesting and valuable parts of the design.

Elementor Page Builder

For a long time, people dreamed that they would be able to put together a web page by dragging and dropping different elements together. That’s how page builders became popular. Page builders introduced a whole different experience of building a page — all actions involving content are done visually. They reduce the time required to produce a desirable structure.

What if we took the most popular CMS in the world and developed the most advanced page builder for it? That’s how Elementor 1.0 for WordPress was created. Here are a few features of the tool worth mentioning:

  • Live editing. Elementor provides instant live editing — what you see is what you get! The tool comes with a live drag-and-drop interface. This interface eliminates guesswork by allowing you to build your layout in real time.
  • Elementor comes with a ton of widgets, including ones for the most common website elements. Also, there are dozens of Elementor add-ons created by the community: https://wordpress.org/plugins/search/elementor/
  • Responsive design out of the box. The content you create using Elementor will automatically adapt to mobile devices, ensuring that your website is mobile-friendly. Your design will look pixel-perfect on any device.
  • Mobile-first design. The Elementor page builder lets you create a truly responsive website in a whole new visual way. Use different font sizes, padding and margins per device, or even reverse column ordering for users who are browsing your website using a mobile device.
  • Revision history. Elementor has a history browser that allows you to roll forward and backward through your changes. It gives you the freedom to experiment with a layout without fear of losing your progress.
  • A built-in custom CSS feature allows you to add your own styles. Elementor allows you to add custom CSS to every element, and to see it in action live in the editor.
  • Theme-independence. With Elementor, you’re not tied to a single theme. You can change the theme whenever you like, and your content will come along with you. This gives you, as a WordPress user, maximum flexibility and freedom to work with your favorite theme, or to switch themes and not have to worry about making changes.
  • Complete code reference and a lot of tutorials. Elementor is a developer-oriented product — it’s an open-source solution with a complete code reference. If you’re interested in creating your own solutions for Elementor, it’s worth checking the website https://developers.elementor.com. The website contains a lot of helpful tutorials and explanations.

There are two particular cases in which Elementor would be helpful to web developers:

  • Web developers who need to create an interactive prototype really quickly. Elementor can help in situations where a team needs to provide an interactive solution but doesn’t have enough time to code it.
  • Web developers who don’t want to be involved in post-development activities. Elementor is perfect when a website is developed for a client who wants to make a lot of changes themselves without having to write a single line of code.

Meet Elementor Pro 2.0 Theme Builder

Despite all of the advantages Elementor 1.0 had, it also had two severe limitations:

  • There were parts of a WordPress website that weren’t customizable. As a user, you were limited to a specific area of your website: the content that resides between the header and the footer. To modify other parts of the website (e.g. footer or header), you had to mess with the code.
  • It was impossible to create dynamic content. While this wouldn’t cause any problems if the website contained only static pages (such as an “About Us” page), it might be a roadblock if the website had a lot of dynamic content.

In an attempt to solve these problems, the Elementor team released the Elementor 2.0 Theme Builder, with true theme-building functionality. Elementor Pro 2.0 introduces a new way to build and customize websites. With Theme Builder, you don't have to code menial theme jobs anymore and can instead focus on deeper website functionality. You are able to design the entire page in the page builder. No header, no footer, just Elementor.

How Does Theme Builder Work?

The tool allows you to build a header, footer, single or archive templates, and other areas of a website using the same Elementor interface. To make that possible, Elementor 2.0 introduces the concept of global templates. Templates are design units. They’re capable of customizing each and every area of your website.

The process of creating a template is simple:

  1. Choose a template type.
  2. Build your page’s structure.
  3. Set the conditions that define where to apply your template.

Let’s explore each of these steps in more detail by creating a simple website. In the next section, we’ll build a company website that has a custom header and footer and dynamic content (a blog and archive). But before you start the process, make sure you have the latest version of WordPress, with the Elementor Pro plugin installed and activated. It is also worth mentioning that you should have a theme for your website. Elementor doesn’t replace your theme; rather, it gives you visual design capabilities over every part of the theme.

Custom Header And Footer

The header and footer are the backbone of every website. They are where users expect to see navigation options. Helping visitors navigate is a top priority for web designers.

Let’s start with creating a header. We’ll create a fairly standard header, with the company’s logo and main menu.

The process of creating a custom header starts with choosing a template. To create a new template, you’ll need to go to “Elementor” → “My Templates” → “Add New”.


You’ll see a dialog box, “Choose Template Type”. Select “Header” from the list of options.

Choose the type of template you want to create. It can be a header, footer, single post page or archive page.

Once you choose a type of template, Elementor will display a list of blocks that fit that type of content. Blocks are predesigned layouts provided by Elementor. They save you time by providing common design patterns that you can modify to your own needs. Alternatively, you can create a template from scratch.

Choose either a predesigned block for your header, or build the entire menu from scratch.

Let’s choose the first option from the list (“Metro”). You can see that the top area of the page layout has a new object — a newly created header.


Now, you need to customize the header according to your needs. Let’s choose a logo and define a menu. Click on the placeholder “Choose Your Image”, and select the logo from the gallery. It’s worth mentioning that the template embeds your website’s logo. This is good because if you ever change that logo at the website level, the header will automatically be updated on all pages. Next, click on the menu placeholder and select the website’s main menu.


When the process of customization is finished, you need to implement the revised header on your website. Click the “Publish” button. The “Display Conditions” window will ask you to choose where to apply your template.

Every template contains the display conditions that define where it’s placed. Choose where the header will be shown.

The conditions define which pages your template will be applied to. It’s possible to show the header on all pages, to show it only on certain pages or to exclude some pages from showing the header. The latter case is helpful if you don’t want to show the header on particular pages.

Choose where you want to show the header. Want one header for the home page and another for the services page? Get it done in minutes.

As soon as you publish your template, Elementor will recognize it as a header and will use it on your website.

Now it’s time to create the footer for your website. The process is similar; the only difference is that this time you’ll need to choose the template named “Footer” and select the footer layout from the list of available blocks. Let’s pick the first option from the list (the one that says “Stay in Touch” on the dark background).

Choosing a block for a footer.

Next, we need to customize the footer. Change the color of the footer and the text label under the words “Stay in Touch”. Let’s reuse the color of the header for the footer. This will make the design more visually consistent.


Finally, we need to choose display conditions. Similar to the header, we’ll choose to display the footer for the entire website.


That’s all! You just built a brand new header and footer for your website without writing a single line of code. The other great news is that you don’t have to worry about how your design will look on mobile. Elementor does that for you. UI elements such as the top-level menu will automatically become a hamburger for mobile users.

Single Post for Blog

Let’s design a blog page. Unlike static pages, such as “About us”, the blog has dynamic content. Elementor 2.0 allows you to build a framework for your content. So, each time you write a new blog post, your content will automatically be added to this design framework.

The process of creating a blog page starts with selecting a template. For a single blog post, choose the template type named “Single.” We have two options of blocks to choose from. Let’s choose the first one.

Choosing a block for a single post.

The block you selected has all of the required widgets, so you don’t need to change anything. But it’s relatively easy to adjust the template if needed. A single post is made of dynamic widgets such as the post title, post content, featured image, meta data and so on. Unlike static widgets that display content that you enter manually, dynamic widgets draw content from the current pages where they’re applied. These widgets are in the “Elements” panel, under the category “Theme Elements”.

List of dynamic elements. A dynamic widget changes according to the page it’s used on.

When you work on dynamic content like a single post, you’ll want to see how it looks on different posts. Elementor gives you a preview mode so you can know exactly what your blog will look like.

To go into preview mode, you need to click on the Preview icon (the eye icon in the bottom-left part of the layout), and then “Settings”.

Never again work on the back end and guess what the front end will look like. Use preview mode to see how your templates will work for your content.

To see what your page will look like when it’s filled with content, simply choose a source of content (e.g. a post from the “News” category).

Fill your template with content from your actual website to see what the result will look like.

Once you’ve finished creating dynamic content, you’ll need to choose when the template will be applied. Click on the “Publish” button, and you’ll see a dialog that allows you to define conditions.

Choosing conditions for a single post template.

Archive

The archive page shows an assortment of posts. Your archive page makes it easy for readers to see all of your content and to dive deeper into the website. It’s also a common place to show search results.

The Theme Builder enables you to build your own archive using a custom taxonomy. To create an archive page, you need to go through the usual steps: create a new template, and choose a block for it. For now, Elementor provides only one type of block for this type of template (you can see it in the image below).


After selecting this block, all you need to do is either set a source for your data or stick to the default selection. By default, the archive page shows all available blog posts. Let’s leave it as is.


As you can see, we’ve successfully customized the website’s header, footer, single post and archive page, without any coding roadblocks.

What To Expect In The Near Future

Elementor is being actively developed, with new features and exciting enhancements released all the time. This means that the theme builder is only going to get better. The Elementor team plans to add integration for plugins such as WooCommerce, Advanced Custom Fields (ACF), and Toolset. The team also welcomes feedback from developers. So, if you have a feature that you would like to have in Elementor, feel free to reach out to the Elementor team and suggest it.

Conclusion

When WordPress was released 15 years ago, the idea behind it was to save valuable time for developers and to make the process of content management as easy as possible. Today, it is widely regarded as a developer-friendly tool. Elementor is no different. The tool now offers never-before-seen flexibility to visually design an entire website. Don’t believe me? Try it for yourself! Explore Elementor Pro today.


May 21 2018

10:00

Building Diverse Design Teams To Drive Innovation

Riri Nagao

There has been a surge of conversations about the tech industry lacking diversity, and companies are therefore encountering barriers to innovation. The current state of technology faces inequality and privilege, a consequence of having limited voices represented in the design and product development process. In addition, we live in a challenged political and socio-economic state where it’s easier to be divided than to come together despite differences.

Design’s role in companies is becoming less about visual appeal and more about hitting business goals and creating value for users. Therefore, the need to build teams with diverse perspectives is becoming imperative. Design will not only be critical to solving problems on the product and experience level, but also relevant on a bigger scale to close social divides and to create inclusive communities.


What Is Diversity And Why Is It Important?

Diversity is in perspectives and values, which are influenced both by inherent traits (such as ethnicity, gender, age, sexual orientation) and by acquired traits gained from various life experiences (cultural influences, education, social circle, etc.). A combination of traits shapes people’s identity and the way they think.

In particular, conflicts and adversities experienced by people have a significant influence on how they develop their values. The more an individual has stepped outside their comfort zone, the more unique a perspective they bring to the table and the greater their capacity for compassion towards others.

Diversity is important because it directly affects long-term success, innovation, and growth. Advantages of working on a diverse team include increased collaboration, more effective communication, well-rounded skill sets, less susceptibility to complacency, and active efforts toward inclusivity earlier in the process.


What Is The Competing Values Framework?

The positive correlation between diversity and innovation is undeniable. So how exactly does it work? Having differing and oftentimes clashing perspectives on a team seems to hinder progress rather than drive it. But with the right balance of values, this dynamic is extremely advantageous. Design, as a problem-solving discipline, uses insights to drive innovation, which can only manifest between differences, not commonalities. When different perspectives and values are represented, blind spots become more apparent and implicit biases are challenged.

This is illustrated in the Competing Values Framework, a robust blueprint that was devised by Quinn and Rohrbaugh, based on researching qualities of companies that have sustainably produced a steady stream of innovative solutions over the years. This model for organizational effectiveness shows how different perspectives translate into business values, as well as show where their weaknesses are.

These are categorized into “quadrants” as follows:

The CVF can help you build teams that are optimized for any goal.

1. Collaborate

People with characteristics from the Collaborate quadrant are committed to cooperating based on shared values. They foster trust with each other and with their audience through compassion and empathy. Their priorities are the long-term growth of communities, and they commit to learning and mentoring. While a sense of unity might help a team be more purpose-driven, it can discourage individuals who think differently from bringing new ideas to the table, because the group is averse to taking risks. People here also lose sight of the realities of constraints because they look too far ahead.

2. Create

While most people are hesitant about change and innovation, those in this quadrant embrace them. They’re extremely flexible with a shifting landscape of user and business goals and aren’t afraid of taking risks. Creatives see risk as an opportunity for growth and embrace different ways of thinking to come up with solutions. Trends are set by creatives, not followed. In contrast, however, those in this quadrant aren’t as logical and practical with the execution needed to bring ideas to life. Their flexibility can become chaotic and unpredictable. Taking risks can pay off significantly, but it’s more detrimental without a foundation.

3. Compete

As the name implies, people here are competitive and focus on high performance and big results. They’re excellent decision makers, which is why they get things done quickly. They know exactly how to utilize resources around them to beat competitors and get to the top of the market. Competitors stay focused on the business objectives of increasing revenue and hitting target metrics. On the other hand, they’re not as visionary in the long run, since they prioritize immediate results. Because of this, they may not be as compassionate towards their audience and may not consider the human side of company growth.

4. Control

People in this quadrant focus on creating systems that are reliable and efficient. They’re practical and can plan strategically for scaling, and they constantly revisit their design processes to optimize for productivity. They are extremely detail oriented and can identify areas of opportunities in the unexpected. They’re also experts at dealing with multiple moving parts and turn chaos into harmony. But if there are too many Control qualities on a team, they become vulnerable to falling into complacency since they depend on reliable systems. They are averse to taking risks and fear the nature of unpredictability.

People and their values don’t always neatly fit into categories, but this framework is flexible in helping teams identify their strengths and weaknesses. Many individuals have traits that cover more than one quadrant, but there are definitely dominant qualities. Being able to identify what those are at the individual level, as well as within a team and at the company level, is important.

How Do We Use The CVF To Build Diverse Teams?

There are already many great design processes and frameworks that take aspects of the CVF to help teams take advantage of diverse perspectives. The sprint model, developed by the design partners at Google Ventures, is an excellent workflow that brings together differing values and skill sets to solve problems, with an emphasis on completing it in a short amount of time. IDEO’s design thinking process, also referred to as human-centered design, puts users at the forefront and drives decisions with empathy, with collaboration being at the core.

Note: Learn more about GV’s Design Sprint model and IDEO’s Design Thinking approach.

The CVF complements many existing design processes to help teams bring their differing perspectives together and design more holistically. In order to do this, teams need to evaluate where they are, how they fit into the company and how well that aligns with their priorities. They should also identify the missing voices and assess areas for improvement. They need to be asking themselves,

What has the team dynamic been like for the past year? What progress has been made? What goals (business/user/team) are the most important?

The Competing Values Framework assessment is a practical way to (1) establish the desired organizational outcomes and goals, (2) evaluate the current practices of teams within the organization/company and how they manage workflows, and (3) assess the individual’s role and how it fits into the context of the team and company.

For example, a team that may not have had many roadblocks and disagreements may represent too much of the Collaborate quadrant and need people who represent more of the Compete quadrant to drive results. A team that has taken risks, has had failures, and has dealt with many moving parts (Create) may need people who have characteristics of the Control quadrant for stability and scaling on a practical level to drive results and growth.

If teams can expand by hiring more, they should absolutely onboard more innovators who bring different perspectives and strengths. But teams should also keep in mind that it’s absolutely possible to work with what they already have and can utilize resources at their disposal. Here are some practical ways that teams can increase diversity:

Hire For Diversity

When hiring, it’s important to find people with unique perspectives to complement existing designers and stakeholders. Writing inclusive job descriptions to attract a wider range of candidates makes a big difference. Involving people from all levels and backgrounds within the company who are willing to embrace new perspectives is essential. Hiring managers should ask thoughtful questions to gauge how well candidates exercise their problem-solving skills and empathy in real-life business cases. Not making assumptions about others, even with something simple like their pronouns, can establish safe work environments and encourage people to be open about their views and values.

Step Outside The Bubble

Whether this would be directly for client work or for building team rapport, it’s valuable to get people out of the office to experience things outside of their familiar scope. It’s worthwhile for design teams to interact with users and spend time in their shoes, not only for their own work as UX practitioners but also to help expand their worldview. They should be encouraged to constantly learn something new. They should be given opportunities to travel to places that are completely different from their comfort zone. Teams should also be encouraged to go to design events and learn from industry experts who do similar work but in different contexts. Great ideas emerge when people experience things outside their routine, so they should always get out and learn!

Drive Diversity Initiatives Internally

Hosting in-house hackathons to get teams to interact differently allows designers to expand their skills while learning new approaches to problem solving. It is also an opportunity to work with people from other teams and acquire the skills to adapt quickly. Bringing in outside experts to share their wisdom is a great way for teams to learn new ways of thinking. Some companies, especially larger organizations, have communities based on interests outside of work, such as a love of food or an interest in outdoor activities. Teaching each other skills through internal workshops is also great.

Foster A Culture Of Appreciation

Some companies have a weekly roundtable session where each person on the team shares one thing he or she appreciates about another person. Not only does this encourage high morale, but it also empowers teams to produce better work. At the same time, teams are given a safe space to be vulnerable with each other and take risks. This is an excellent way to bond over goals and get teams with differing perspectives together to collaborate.

What Should Diverse Teams Keep In Mind?

Acknowledge that while different ideas and values are important, they can clash if conversations are not managed effectively. Discrimination and segregation can happen. But creating a workspace and team dynamic that is open to discussion and provides a safe space to challenge existing ideas is crucial. People should be open to being challenged and should ask questions, rather than get defensive about their ideas. Compromise will be necessary in this process.

When diversity isn’t managed actively, or there is an imbalance of values on a team, several challenges arise:

  • Communication barriers — How people say things can be different from how others hear and understand them. Misunderstandings could lead to crucial voices not always being heard. If a particular style of communication is accepted over others, people fear speaking up. They might hold the wisdom to make design decisions that could impact the business. If a culture of openness doesn’t exist, a lot of those gold mines never get their opportunities to see the light of day.
  • Discrimination and segregation — As teams become more diverse, people can stray away from or avoid others who think differently. This can lead to increased feelings of resentment, leading to segregation and even discrimination. People might be quick to judge one another based on stereotypical references, rather than mustering the courage to understand where they come from.
  • Competition over collaboration — People on design teams need to work collaboratively, but when their perspectives clash and they aren’t encouraged to use those perspectives to create value for the company, they become competitive against each other rather than willing to work together. It’s important to bring the team back to the main goal.

Embracing different perspectives takes courage, but it’s everyone’s responsibility to be mindful of one another. Being surrounded by people with different perspectives is certainly uncomfortable and can be a stretch outside one’s comfort zone. Design teams are well positioned to do this and to be role models for others on its impact. Conversations about leveraging differing perspectives should happen as early in the process as possible to limit friction and encourage effective collaboration.

Conclusion And Next Steps

Rather than approaching diversity as an obligation and a risk, leaders should see it as a benefit to their company’s and team’s growth. It is often said that roadblocks are a sign of innovation. Therefore, when designers in a team are faced with challenges, they are able to innovate. And only through the existence of different perspectives can such challenges emerge. Assessing where the company, teams, and individuals are within the CVF quadrants is a great start, and taking steps to build a team with complementary perspectives will be key to driving long-term innovation.


I’d like to personally thank the following contributors for taking the time to provide me with insights on hiring for and building diverse design teams: Samantha Berg, Khanh Lam, Arin Bhowmick, Rob Strati, Shannon O’Brien, Diego Pulido, Nathan Gao, Christopher Taylor Edwards, among many others who engaged in discussions with me on this topic. Thank you for allowing me to draw on your experiences and for being part of facilitating this dialogue on the value of diversity in design.

Smashing Editorial(cc, ra, yk, il)

May 18 2018

13:45

The Future Is Here! Augmented And Virtual Reality Icon Set


The Smashing Editorial
2018-05-18T15:45:28+02:00

What once sounded like science fiction has become a reality: All you need is to grab a VR headset or simply use your web browser and you suddenly find yourself in an entirely different place, a different time, or in the middle of your favorite game.

Augmented and virtual reality are changing the way we experience and interact with the world around us — from the way we consume media and shop to the way we communicate and learn. Regardless of whether you’re skeptical of this evolution or just can’t wait to fully immerse yourself in virtual worlds, one thing is for sure: Exciting times are ahead of us.

Augmented And Virtual Reality Icon Set

To share their excitement about AR and VR, the creative folks at Vexels have designed a free set of 33 icons that take you on a journey through the new technology as well as the worlds it encompasses. The set includes useful icons of devices but also cute, cartoonish illustrations of people interacting with them. All icons are available in four formats (PNG, EPS, AI, and SVG) so you can resize and customize them until they match your project’s visual style perfectly. Happy exploring!


Please Give Credit Where Credit Is Due

This set is released under a Creative Commons Attribution 3.0 Unported license, i.e. you may modify the size, color, and shape of the icons. Attribution is required, so if you would like to spread the word in blog posts or anywhere else, please do remember to credit Vexels as well as provide a link to this article.

Here’s a sneak peek of some of the icons:

Full Preview Of The Icon Set

Insights From The Designers

It seems like every day there’s news about augmented and virtual reality devices and products, and we couldn’t be more excited about it! We can’t wait until cute animals guide us through the streets, video games come to life, and we can fully immerse ourselves in movies. When designing this pack we got carried away and ended up creating icons, avatars, and illustrations of different people interacting with virtual worlds. We hope you find these devices, cartoons, and scenarios both useful and exciting. The future is here!

Download The Icon Set For Free

A big Thank You to Vexels for designing this wonderful icon set — we sincerely appreciate your time and effort! Keep up the fantastic work!

Thanks to Cosima Mielke for helping to prepare this article.

11:51

Monthly Web Development Update 5/2018: Browser Performance, Iteration Zero, And Web Authentication


Anselm Hannemann
2018-05-18T13:51:17+02:00

As developers, we often talk about performance and request browsers to render things faster. But when they finally do, we demand even more performance.

Alex Russell from the Chrome team shared some thoughts on developers abusing browser performance and explains why websites are still slow even though browsers have reinvented themselves with incredibly fast rendering engines. This is in line with an article by Oliver Williams in which he states that we’re focusing on the wrong things: instead of delivering the fastest solutions for slower machines and browsers, we’re serving even bigger bundles with polyfills and transpiled code to every browser.

It certainly isn’t easy to break out of this pattern and keep bundle size to a minimum in the interest of the user, but we have the technologies to achieve that. So let’s explore non-traditional ways and think about the actual user experience more often, and do so before defining a project workflow rather than afterward.

Front-End Performance Checklist 2018

To help you cater for fast and smooth experiences, Vitaly Friedman summarized everything you need to know to optimize your site’s performance in one handy checklist. Read more →


News

General

  • Oliver Williams wrote about how important it is that we rethink how we’re building websites and implement “progressive enhancement” to make the web work great for everyone. After all, it’s us who make the experience worse for our users when blindly transpiling all our ECMAScript code or serving tons of JavaScript polyfills to those who already use slow machines and old software.
  • Ian Feather reveals that around 1% of all requests for JavaScript on BuzzFeed time out. That’s about 13 million requests per month. A good reminder of how important it is to provide a solid fallback, progressive enhancement, and workarounds.
  • The new GDPR (General Data Protection Regulation) directive is coming very soon, and while our inboxes are full of privacy policy updates, one thing that’s still very unclear is which services can already provide so-called DPAs (Data Processing Agreements). Joschi Kuphal collects services that offer a DPA, so that we can easily look them up and see how we can obtain a copy in order to continue using their services. You can help by contributing to this resource via Pull Requests.

UI/UX

Product design principles: How to create a consistent, harmonious user experience when designing product cards? Mei Zhang shares some valuable tips. (Image credit)

Security

Privacy

  • The GDPR Checklist is another helpful resource for people to check whether a website is compliant with the upcoming EU directive.
  • Bloomberg published a story about the open-source privacy-protection project pi-hole, why it exists and what it wants to achieve. I use the software daily to keep my entire home and work network tracking-free.
GDPR Compliance Checklist: Achieving GDPR compliance shouldn’t be a struggle. The GDPR Compliance Checklist helps you see clearer. (Image credit)

Web Performance

  • Postgres 10 has been here for quite a while already, but I personally struggled to find good information on how to use all these amazing features it brings along. Gabriel Enslein now shares Postgres 10 performance updates in a slide deck, shedding light on how to use the built-in JSON support, native partitioning for large datasets, hash index resiliency, and more.
  • Andrew Betts found out that a lot of websites are using outdated headers. He now shares why we should drop old headers and which ones to serve instead.

Accessibility

Page previews open possibilities in multiple areas, as Nirzar Pangarkar explains. (Image credit: Nirzar Pangarkar)

CSS

  • Rarely talked about in recent years, CSS tables are still used on many websites to display data in tables (and that’s totally the correct way to do so). But as they’re not responsive by default, we’ve always struggled to make them work on mobile screens, and most of us used JavaScript to do it. Lea Verou now found two new ways to achieve responsive tables with CSS alone: One uses text-shadow to copy text to other rows, the other uses element() to copy the entire <thead> to other rows. I’m still trying to understand how Lea found these solutions, but this is amazing!
  • Rachel Andrew wrote an article about building and providing print stylesheets in 2018 and why they matter a lot for users even if they don’t own a printer anymore.
  • Osvaldas Valutis shares how to implement the so-called “Priority Plus” navigation pattern mostly with CSS, at least in modern browsers. If you need to support older browsers, you will need to extend this solution further, but it’s a great start to implement such a pattern without too much JavaScript.
  • Rachel Andrew shares what’s coming up in the CSS Grid Level 2 and Subgrid specifications and explains what it is, what it can solve, and how to use it once it is available in browsers.

JavaScript

  • Chris Ashton “used the web for a day with JavaScript turned off.” This piece highlights the importance of thinking about possible JavaScript failures on websites and why it matters if you provide fallbacks or not.
  • Sam Thorogood shares how we can build a “native undo & redo for the web”, as used in many text editors, games, planning and graphics applications, and in other situations such as drag-and-drop reordering. And while it’s not easy to build, the article explains the concepts and technical aspects that help us understand this complicated matter.
  • There’s a new way to implement element/container queries into your application: eqio is a tiny library using IntersectionObserver.

Work & Life

  • Johannes Seitz shares his thoughts about project management at the start of projects. He calls the method “Iteration Zero”. An interesting concept to understand the scope and risks of a project better at a time when you still don’t have enough experience with the project itself but need to build a roadmap to get things started.
  • Arestia Rosenberg shares why her number one piece of advice for freelancers is to ‘lean into the moment’. It’s about doing work when you can and using the chance to do something else when you don’t feel you can work productively. In the end, this results in a happier life and more productivity. I’d personally extend this to everyone who is able to work this way, but, of course, it’s most applicable to freelancers.
  • Sam Altman shares a couple of handy productivity tips that are not just a ‘ten things to do’ list but actually really helpful thoughts about how to think about being productive.

Going Beyond…

  • Ethan Marcotte elaborates on the ethical issues with Google Duplex, which is designed to imitate the human voice so well that people don’t notice whether they’re talking to a machine or a human being. While this sounds quite interesting from a technical point of view, it will push the debate about fake news much further and make it even harder to differentiate between something a human said and something a machine imitated.
  • Our world is actually built on promises, and here’s why it’s so important to stick to your promises even if it’s hard sometimes.
  • I bet that most of you haven’t heard of Palantir yet. The company, funded by Peter Thiel, is a data-mining company that intends to collect as much data as possible about everybody in the world. It’s known to collaborate with various law enforcement authorities and even has connections to military services. What they do with the data and which data they hold about us isn’t known. My only hope right now is that this company will suffer a lot from the EU GDPR directive and that the European Union will try to stop their uncontrolled data collection. Facebook’s data practices are nothing compared to Palantir’s, it seems.
  • Researchers sound the alarm after an analysis showed that buying a new smartphone consumes as much energy as using an existing phone for an entire decade. I guess I’ll not replace my iPhone 7 anytime soon — it’s still an absolutely great device and just enough for what I do with it.
  • Anton Sten shares his thoughts on vanity metrics, a common way to share numbers and statistics out of context. Since realizing how little relevance they have, he now thinks differently about most of the commonly shared data, such as investment figures or usage data of services. Reading one number without a context to compare it to doesn’t tell us anything. We should keep that in mind.

We hope you enjoyed this Web Development Update. The next one is scheduled for Friday, June 15th. Stay tuned.

Smashing Editorial(cm)

May 17 2018

15:58
9 Effective Invoicing and Time Management Apps You Should Pay Attention To
12:10

More Than Pixels: Selling Design Discovery


Kyle Cassidy
2018-05-17T14:10:03+02:00

As designers, we know that research should play a pivotal role in any design process. Sadly, however, there are still a lot of organizations that do not see the value of research and would rather jump straight into the visual design stage of the design process.

The excuses given here tend to be:

“We already know what our customers want.”

“We don’t have the time/budget/people.”

“We’ll figure out the flaws in BETA.”

As designers, it is important that we are equipped to have conversations with senior stakeholders in which we can sell and justify the importance of the so-called “Design Discovery” within the design process.

In this article, I’ll demystify what is meant by the term “Design Discovery” to help you better establish the importance of research within the creative process. I’ll also be giving advice on how to handle common pushbacks, along with providing various hints and tips on how to select the best research methods when undertaking user research.

My hope is that by reading this article, you will become comfortable with being able to sell “Design Discovery” as part of the creative process. You will know how to build a “Discovery Plan” of activities that answers all the questions you and your client need to initiate the design process with a clear purpose and direction.


Design With A Purpose

Digital design is not just about opening up Photoshop or Sketch and adding colors, shapes, textures, and animation to make a beautiful looking website or app.

As designers, before putting any pixels on canvas, we should have a solid understanding of:

  1. Who are the users we are designing for?
  2. What are the key tasks those users want to accomplish?

Ask yourself, what is the purpose of what you are producing? Is it to help users:

  • Conduct research,
  • Find information,
  • Save time,
  • Track fitness,
  • Maintain a healthy lifestyle,
  • Feel safe,
  • Organize schedules,
  • Source goods,
  • Purchase products,
  • Gather ideas,
  • Manage finances,
  • Communicate,
  • Or something entirely different?

Understanding the answers to these questions should inform your design decisions. But before we design, we need to do some research.

Discovery Phase

Any design process worth its salt should start with a period of research, which (in agency terms) is often referred to as a “Discovery Phase”. The time and budget designers can allocate to a Discovery phase are determined by many factors, such as the amount of existing project research and documentation the client has, as well as the client’s budget. Not to mention your own personal context, which we will come to later.

Business And User Goals

In a Discovery phase, we should ensure adequate time is dedicated to exploring both business and user goals.

Yes, we design experiences for users, but ultimately we produce our designs for clients (be that internal or external), too. Clients are the gatekeepers to what we design. They have the ultimate say over the project and they are the ones that hold the purse strings. Clients will have their own goals they want to achieve from a project and these do not always align with the users’ goals.

In order to ensure what we design throughout our design process hits the sweet spot, we need to make sure that we are spending time exploring both the business and user goals for the project (in the Research/Discovery phase).

Your Discovery phase should explore both user and business goals. (Large preview)

Uncovering Business Goals

Typically, the quickest way to establish the business goals for a project is to host a stakeholder workshop with key project stakeholders. Your aim should be to get as many representatives from across different business functions as possible into one room to discuss the vision for the project (Marketing, Finance, Digital, Customer Services, and Sales).

Tip: Large organizations often tend to operate in organizational silos. This allows teams to focus on their core function such as marketing, customer care, etc. It allows staff to be effective without being distracted by activities where they have no knowledge and little or no skills. However, it often becomes a problem when the teams don’t have a singular vision/mission from leadership, and they begin to see their area as the driving force behind the company’s success. Often in these situations, cross-departmental communication can be poor to non-existent. By bringing different members from across the organization together in one room, you get to the source of the truth quicker and can link together internal business processes and ways of working.

The core purpose of the stakeholder workshop should be:

  1. To uncover the Current State (explore what exists today in terms of people, processes, systems, and tools);
  2. To define the Desired Future State (understand where the client wants to get to, i.e. their understanding of what the ideal state should look like);
  3. To align all stakeholders on the Vision for the project.
Use workshops to align stakeholders around the vision and define the Desired Future State. (Large preview)

There are a series of activities that you can employ within your stakeholder workshop. I typically build a full workshop day (7-8 hours) around 4-5 activities, allowing 45 minutes to 1 hour for lunch and two 15-minute coffee breaks between exercises. Any more than that, and I find energy levels start to dwindle.

I will vary the workshop activities I do around the nature of the project. However, each workshop I lead tends to include the following three core activities:

  • Business Model Canvas: To explore the organization’s business model and discuss where this project fits within that model.
  • Measurement Plan: To define the most important business metrics the business wants to be able to measure and report on.
  • Proto Personas and User Stories: To explore who the business feels their users are and what key user stories we need to deliver against.

Tip: If you’re new to delivering client workshops, I’ve added a list of recommended reading to the references section at the bottom of this article which will give you useful ideas on workshop activities, materials, and group sizes.

Following the workshop, you’ll need to produce a write-up of what happened in the workshop itself. It also helps to take lots of photos on the workshop day. The write-up should not only explain the purpose of the day and the key findings, but also recommend next steps. Write-ups can be especially helpful for internal communication within the organization, bringing non-attendees up to speed with what happened on the day and agreeing on the next steps for the project.

Uncovering User Goals

Of course, Discovery is not just about understanding what the organization wants. We need to validate what users actually want and need.

With the business goals defined, you can then move on to explore the user goals through conducting some user research. There are many different user research methods you can employ throughout the Discovery process from Customer Interviews and Heuristic Evaluations to Usability Tests and Competitor Reviews, and more.

Having a clear idea of the questions you are looking to answer and available budget is the key to helping select the right research methods. It is, for this reason, important that you have a good idea of what these are before you get to this point.

Before you start to select which are the best user research methods to employ, step back and ask yourself the following question:

“What are the questions I/we as a design team need answers to?”

For example, do you want to understand:

  • How many users are interacting with the current product?
  • How do users think your product compares to a competitor product?
  • What are the most common friction points within the current product?
  • How is the current product’s performance measured?
  • Do users struggle to find certain key pieces of information?

Grab a pen and write down what you want to achieve from your research in a list.

Tip: If you know you are going to be working on a fixed/tight budget, it is important to get confirmation on what that budget may look like at this point since this will have some bearing on the research methods you choose.

Another tip: User research does not have to happen after organizational research. I always find it helps to do some exploratory research prior to running stakeholder workshops. This ensures you go into the room with a baseline understanding of the organization, its users, and some common pain points. Some customers may not know what users do on their websites/apps nowadays; I like to go in prepared with some research to hand, whether that be User Testing, Analytics Review, or Tree Testing outputs.

Selecting Research Methods

The map below from the Nielsen Norman Group (NNG) shows an overview of 20 popular user research methods plotted on a 3-dimensional framework. It can provide a useful guide for helping you narrow down on a set of research methods to use.

A map of the top 20 research methods from NNG. (Large preview)

The diagram may look complicated, but let us break down some key terms.

Along the x-axis, research methods are separated by the types of data they produce.

  • Quantitative data involves numbers and figures. It is great for answering questions such as:

    • How much?
    • How many?
    • How long?
    • Impact tracking?
    • Benchmarking?
  • Qualitative data involves quotes, observations, photos, videos, and notes.

    • What do users think?
    • How do users feel?
    • Why do users behave in a certain way?
    • What are users like?
    • What frustrates users?

Along the y-axis, research methods are separated by the user inputs.

  • Behavioral Data
    This data is based on what users do (outcomes).
  • Attitudinal Data
    This data is based on attitudes and opinions.

Finally, research methods are also classified by their context. Context describes the nature of the research: some research methods, such as interviews, require no product at all, while usability tests require users to complete scripted tasks and tell us how they think and feel.

Using the Model

Using your question list, firstly identify whether you are looking to understand users’ opinions (what people say) or actions (what people do), and secondly whether you are looking to understand why they behave in a certain way (why and how to fix) or how many of them are behaving in a certain way (how many and how much).

Now look at this simplified version of the matrix, and you should be able to work out which user research methods to focus in on.

Think about what questions you’re trying to answer when selecting research methods. (Large preview)

Model Examples

Example 1

If you’re looking to understand users’ attitudes and beliefs and you don’t have a working product then ‘Focus Groups’ or ‘Interviews’ would be suitable user research methods.

top 20 research methods Large preview

Example 2

If you want to understand how many users are interacting with the current website or app then an ‘Analytics Review’ would be the right research method to adopt. Meanwhile, if you want to test how many people will be impacted by a change, A/B testing would be a suitable method.

top 20 research methods Large preview

No Silver Bullet

By now you should realize there is no shortcut to the research process; not one single UX research method will provide all the answers you need for a project.

Analytics reviews, for example, are a great low-cost way to explore behavioral, quantitative data about how users interact with an existing website or application.

However, this data falls short of telling you:

  • Why users visited the site/app in the first place (motivation);
  • What tasks they were looking to accomplish (intent);
  • If users were successful in completing their tasks (task completion);
  • How users found their overall experience (satisfaction).

These types of questions are best answered by other research methods such as ‘Customer Feedback’ surveys (also known as ‘Intercept Surveys’) which are available from tools such as Hotjar, Usabilla, and Qualaroo.

Usabilla’s quick feedback button allows users to provide instant feedback on their experience. (Large preview)

Costing Research/Discovery

In order to build a holistic view of the user experience, the Research/Discovery process should typically last around 3 to 4 weeks and combine several different research methods.

Use your list of questions and the NNG matrix to help you decide on the most suitable research methods for your project. Wherever possible, try to use complementary research methods to build a bigger picture of users’ motivations, drivers, and behaviors.

Your Design Discovery process should combine different types of data. (Large preview)

Tip: The UX Recipe tool is a great website for helping you pull together the different research methods you feel you need for a project and to calculate the cost of doing so.

Which brings me on to my next point.


Contexts And Budgets

The time and budget which you can allocate to Discovery will vary greatly depending on your role. Are you working in-house, freelance, or in an agency? Some typical scenarios are as follows:

  • Agency
    Clients employ agencies to build projects that generate the right results. To get the right results, you first need to ensure you understand both the business’s needs and the needs of the users, as these are almost never the same. Agencies almost always start with a detailed Discovery phase, often led by the UX Design team. Budgets are generally included in the cost of the total project, so ample time is available for research.
  • In-House: Large Company
    When working in a large company, you are likely to already have a suite of tools along with a program of activity you’re using to measure the customer experience. Secondly, you are likely to be working alongside colleagues with specialist skills such as Data Analysts, Market Researchers, and even a Content Team. Do not be afraid to say hello to these people and see if they will be willing to help you conduct some research. Customer service teams are also worth befriending. Customer service teams are the front line of a business where customer problems are aired for all to see. They can be a goldmine of useful information. Go spend some time with the team, listen to customer service calls, and review call/chat logs.
  • In-House: Smaller Company
    When working as part of an in-house team in a smaller company, you are likely to be working on a tight budget and are spread across a lot of activities. Nevertheless, with some creative thinking, you can still undertake some low-cost research tasks such as Site Intercept surveys, Analytics reviews, and Guerilla testing, or simply review applied research.
  • Freelance
    When working freelance, your client often seeks you out with a very fixed budget, timeline, and set of deliverables in mind, i.e. “We need a new Logo” or “We need a landing page design.” Selling Discovery as part of the process can often be a challenge for freelancers, who typically end up doing this work in their own time and even working overtime. But it doesn’t have to be like this. Clients can be willing to invest in a pre-project Discovery phase. However, you need to be confident enough to sell yourself and defend your process. This video has some excellent tips on how to sell Discovery to clients as a freelancer.

Selling Design Discovery

As you can see from the above, selling Design Discovery can be a challenge depending on your context. It’s much harder to sell Design Discovery when working as a freelancer than it is working within an agency.

Some of the most common excuses organizations put forward for discounting the research process are:

“We don’t have the budget.”

“We’ll find it out in BETA.”

“We don’t have time.”

“We already know what users want.”

When selling Design Discovery and combating these points of view, remember these key things:

It doesn’t have to be expensive.

Research does not have to be costly especially with all of the tools and resources we have available today. You can conduct a Guerilla User Testing session for the price of a basic coffee. Furthermore, you can often source willing participants from website intercepts, forums or social media groups who are more than willing to help.

It’s much harder to fix later.

The findings that come as an output from research can be invaluable. It is much more cost- and time-effective to spend some of the project budget up front to ensure there are no assumptions and blind spots than it is to course-correct later on if the project has drifted off course. Uncovering blockers or significant pain points late in the project can be a huge drain on time as well as monetary resources.

Organizational views can often be biased.

Within large organizations especially, a view of ‘what users want’ is often shaped by senior managers’ thoughts and opinions rather than any applied user research. These viewpoints then cascade down to more junior members of the team, who start to adopt the same viewpoints. Validating that these opinions are actually correct is essential.

There are other cross-company benefits.

Furthermore, a Discovery process also brings with it internal benefits. By bringing members from other business functions together and setting a clear direction for the project, you should win advocates for the project across many business functions. Everyone should leave the room with a clear understanding of what the project is, its vision, and the problems you are trying to fix. This helps to alleviate an enormous amount of uncertainty within the organization.

I like to best explain the purpose of the discovery phase by using my adaptation of the Design Squiggle by Damien Newman:

See how the Discovery phase allows us time to tackle the most uncertainty?

An adaptation of the Design Squiggle by Damien Newman showing how uncertainty is reduced in projects over time. (Large preview)

Waterfall And Agile

A Discovery phase can be integrated into both Waterfall and Agile project management methodologies.

In Waterfall projects, the Discovery phase happens at the very start of the project and can typically run for 4 to 12 weeks depending on the size of the project, the number of interdependent systems, and the areas which need to be explored.

In Agile projects, you may run a Discovery phase upfront to outline the purpose of the project and its interconnected systems, along with a mini 1-to-2-week discovery process at the start of each sprint to gather the information you need to build out a feature.

The Discovery process can be easily incorporated into both waterfall and agile projects. (Large preview)

Final Thoughts

The next time you start on any digital project:

  • Make sure you allow time for a Discovery phase at the start of your project to define both business and user goals, and to set a vision that gives all stakeholders a clear purpose and direction for the project.

  • Be sure to run a Stakeholder workshop with representatives from a variety of functions across the business (Marketing, Finance, Digital, Customer Services, Sales).

  • Before selecting which user research methods to use on your project, write down a list of questions you wish to understand and get a budget defined. From there, you can use the NNG matrix to help you understand what the best tool to use is.

Further Reading

If you found this article interesting, here is some recommended further reading:

Workshop Books

If you are interested in running Stakeholder workshops, I’d highly recommend reading the following books. Not only will they give you useful hints and tips on how to run workshops, they’re packed full of different workshop exercises to help you get answers to specific questions.

Smashing Editorial(cc, ra, yk, il)

May 16 2018

12:15

Managing SVG Interaction With The Pointer Events Property


Tiffany Brown
2018-05-16T14:15:25+02:00

Try clicking or tapping the SVG image below. If you put your pointer in the right place (the shaded path) then you should have Smashing Magazine’s homepage open in a new browser tab. If you tried to click on some white space, you might be really confused instead.

See the Pen Amethyst by Tiffany Brown (@webinista) on CodePen.

This is the dilemma I faced during a recent project that included links within SVG images. Sometimes when I clicked the image, the link worked. Other times it didn’t. Confusing, right?

I turned to the SVG specification to learn more about what might be happening and whether SVG offers a fix. The answer: pointer-events.


Not to be confused with DOM (Document Object Model) pointer events, pointer-events is both a CSS property and an SVG element attribute. With it, we can manage which parts of an SVG document or element can receive events from a pointing device such as a mouse, trackpad, or finger.

A note about terminology: "pointer events" is also the name of a device-agnostic, web platform feature for user input. However, in this article — and for the purposes of the pointer-events property — the phrase "pointer events" also includes mouse and touch events.

Outside Of The Box: SVG’s "Shape Model"

Using CSS with HTML imposes a box layout model on HTML. In the box layout model, every element generates a rectangle around its contents. That rectangle may be inline, inline-level, atomic inline-level, or block, but it’s still a rectangle with four right angles and four edges. When we add a link or an event listener to an element, the interactive area matches the dimensions of the rectangle.

Note: Adding a clip-path to an interactive element alters its interactive bounds. In other words, if you add a hexagonal clip-path path to an a element, only the points within the clipping path will be clickable. Similarly, adding a skew transformation will turn rectangles into rhomboids.
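To make that concrete, here’s a minimal CSS sketch of the hexagonal example; the .hex class name and the polygon coordinates are assumptions for illustration only:

/* Only points inside the hexagon remain clickable and hoverable */
a.hex {
  display: inline-block;
  clip-path: polygon(25% 0, 75% 0, 100% 50%, 75% 100%, 25% 100%, 0 50%);
}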

SVG does not have a box layout model. You see, when an SVG document is contained by an HTML document, within a CSS layout, the root SVG element adheres to the box layout model. Its child elements do not. As a result, most CSS layout-related properties don’t apply to SVG.

So instead, SVG has what I’ll call a ‘shape model’. When we add a link or an event listener to an SVG document or element, the interactive area will not necessarily be a rectangle. SVG elements do have a bounding box. The bounding box is defined as: the tightest fitting rectangle aligned with the axes of that element’s user coordinate system that entirely encloses it and its descendants. But initially, which parts of an SVG document are interactive depends on which parts are visible and/or painted.

Painted vs. Visible Elements

SVG elements can be “filled” but they can also be “stroked”. Fill refers to the interior of a shape. Stroke refers to its outline.

Together, “fill” and “stroke” are painting operations that render elements to the screen or page (also known as the canvas). When we talk about painted elements, we mean that the element has a fill and/or a stroke. Usually, this means the element is also visible.

However, an SVG element can be painted without being visible. This can happen when the value of the visibility attribute or CSS property is hidden, or when display is none. The element is there and occupies theoretical space. We just can’t see it (and assistive technology may not detect it).

Perhaps more confusingly, an element can also be visible — that is, have a computed visibility value of visible — without being painted. This happens when elements lack both a stroke and a fill.

Note: Color values with alpha transparency (e.g. rgba(0,0,0,0)) do not affect whether an element is painted or visible. In other words, if an element has an alpha transparent fill or stroke, it’s painted even if it can’t be seen.
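As a quick reference, here is a small CSS sketch of these combinations (the class names are hypothetical):

/* Painted but not visible: it has a fill, but visibility is hidden */
.painted-not-visible {
  fill: #d33a2c;
  visibility: hidden;
}

/* Visible but not painted: neither fill nor stroke */
.visible-not-painted {
  fill: none;
  stroke: none;
}

/* Still painted, even though it can't be seen: alpha-transparent fill */
.painted-but-unseen {
  fill: rgba(0, 0, 0, 0);
}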

Knowing when an element is painted, visible, or neither is crucial to understanding the impact of each pointer-events value.

All Or None Or Something In Between: The Values

pointer-events is both a CSS property and an SVG element attribute. Its initial value is auto, which means that only the painted and visible portions will receive pointer events. Most other values can be split into two groups:

  1. Values that require an element to be visible; and
  2. Values that do not.

painted, fill, stroke, and all fall into the latter category. Their visibility-dependent counterparts — visiblePainted, visibleFill, visibleStroke and visible — fall into the former.

The SVG 2.0 specification also defines a bounding-box value. When the value of pointer-events is bounding-box, the rectangular area around the element can also receive pointer events. As of this writing, only Chrome 65+ supports the bounding-box value.

none is also a valid value. It prevents the element and its children from receiving any pointer events. The pointer-events CSS property can be used with HTML elements too. But when used with HTML, only auto and none are valid values.

Since pointer-events values are better demonstrated than explained, let’s look at some demos.

Here we have a circle with a fill and a stroke applied. It’s both painted and visible. The entire circle can receive pointer events, but the area outside of the circle cannot.

See the Pen Visible vs painted in SVG by Tiffany Brown (@webinista) on CodePen.

Disable the fill, so that its value is none. Now if you try to hover, click, or tap the interior of the circle, nothing happens. But if you do the same for the stroke area, pointer events are still dispatched. Changing the fill value to none means that this area is visible, but not painted.

Let’s make a small change to our markup. We’ll add pointer-events="visible" to our circle element, while keeping fill=none.

See the Pen How adding pointer-events: all affects interactivity by Tiffany Brown (@webinista) on CodePen.

Now the unpainted area encircled by the stroke can receive pointer events.

Augmenting The Clickable Area Of An SVG Image

Let’s return to the image from the beginning of this article. Our “amethyst” is a path element, as opposed to a group of polygons each with a stroke and fill. That means we can’t just add pointer-events="all" and call it a day.

Instead, we need to augment the click area. Let’s use what we know about painted and visible elements. In the example below, I’ve added a rectangle to our image markup.

See the Pen Augmenting the click area of an SVG image by Tiffany Brown (@webinista) on CodePen.

Even though this rectangle is unseen, it’s still technically visible (i.e. visibility: visible). Its lack of a fill, however, means that it is not painted. Our image looks the same. Indeed it still works the same — clicking white space still doesn’t trigger a navigation operation. We still need to add a pointer-events attribute to our a element. Using the visible or all values will work here.

See the Pen Augmenting the click area of an SVG image by Tiffany Brown (@webinista) on CodePen.

Now the entire image can receive pointer events.

Using bounding-box would eliminate the need for a phantom element. All points within the bounding box would receive pointer events, including the white space enclosed by the path. But again: pointer-events="bounding-box" isn’t widely supported. Until it is, we can use unpainted elements.
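If you want to start adopting bounding-box anyway, one hedge is to declare it after a supported value and let the cascade do the work. This is a sketch rather than a complete cross-browser fix, since browsers without support will still need the unpainted rectangle trick above:

/* Browsers that understand bounding-box use it;
   everyone else falls back to visible */
svg a {
  pointer-events: visible;
  pointer-events: bounding-box;
}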

Using pointer-events When Mixing SVG And HTML

Another case where pointer-events may be helpful: using SVG inside of an HTML button.

See the Pen Ovxmmy by Tiffany Brown (@webinista) on CodePen.

In most browsers — Firefox and Internet Explorer 11 are exceptions here — the value of event.target will be an SVG element instead of our HTML button. Let’s add pointer-events="none" to our opening SVG tag.

See the Pen How pointer-events: none can be used with SVG and HTML by Tiffany Brown (@webinista) on CodePen.

Now when users click or tap our button, the event.target will refer to our button.

Those well-versed in the DOM and JavaScript will note that using the function keyword instead of an arrow function and this instead of event.target fixes this problem. Using pointer-events="none" (or pointer-events: none; in your CSS), however, means that you don’t have to commit that particular JavaScript quirk to memory.
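If you prefer to keep this out of your markup, the same fix can live in a stylesheet. A minimal sketch, assuming the icon is an inline SVG inside the button:

/* Prevent inline SVG icons from becoming the event.target of button clicks */
button svg {
  pointer-events: none;
}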

Conclusion

SVG supports the same kind of interactivity we’re used to with HTML. We can use it to create charts that respond to clicks or taps. We can create linked areas that don’t adhere to the CSS and HTML box model. And with the addition of pointer-events, we can improve the way our SVG documents behave in response to user interaction.

Browser support for SVG pointer-events is robust. Every browser that supports SVG supports the property for SVG documents and elements. When used with HTML elements, support is slightly less robust. It isn’t available in Internet Explorer 10 or its predecessors, or any version of Opera Mini.

We’ve just scratched the surface of pointer-events in this piece. For a more in-depth, technical treatment, read through the SVG Specification. MDN (Mozilla Developer Network) Web Docs offers more web developer-friendly documentation for pointer-events, complete with examples.

Smashing Editorial(rb, ra, yk, il)

May 15 2018

11:00

Landing The Concept: Movie High-Concept Theory And UX Design


Andy Duke
2018-05-15T13:00:58+02:00

Steven Spielberg once famously said, “If a person can tell me the idea in 25 words or less, it's going to make a pretty good movie.” He was referring to the notion that the best mass-appeal ‘blockbuster’ movies are able to succinctly state their concept or premise in a single short sentence, such as Jaws (“It’s about a shark terrorizing a small town”) and Toy Story (“It’s about some toys that come to life when nobody's looking”).

What if the same were true for websites? Do sites that explain their ‘concept’ in a simple way have a better shot at mass-appeal with users? If we look at the super simple layout of Google's homepage, for example, it gives users a single clear message about its concept equally as well as the Jaws movie poster:

Google homepage: “It’s about letting you search for stuff.” (Large preview)

Being aware of the importance of ‘high-concept’ allows us — as designers — to really focus on user’s initial impressions. Taking the time to actually define what you want your simple ‘high-concept’ to be before you even begin designing can really help steer you towards the right user experience.


What Does High-Concept Theory Mean For UX Design?

So let’s take this seriously and look at it from a UX Design standpoint. It stands to reason that if you can explain the ‘concept’ or purpose of your site in a simple way, you lower the cognitive load on new users as they try to understand it, and in doing so you drastically increase your chances of them engaging.

The parallels between ‘High-Concept’ theory and UX Design best practice are clear. Blockbuster audiences prefer simple, easy-to-relate-to concepts presented in an uncomplicated way. Web users often prefer simpler, easy-to-digest UI (User Interface) design, clean layouts, and no clutter.

Regardless of what your message is, presenting it in a simple way is critical to the success of your site’s user experience. But, what about the message itself? Understanding if your message is ‘high-concept’ enough might also be critical to the site’s success.

What Is The Concept Of ‘High-Concept’ In The Online World?

What do we mean when we say ‘high-concept’? For movies it’s simple — it’s what the film is about, the basic storyline that can be easy to put into a single sentence, e.g. Jurassic Park is “about a theme park where dinosaurs are brought back to life.”

When we look at ‘high-concept’ on a website, however, it can really apply to anything: a mission statement, a service offering, or even a new product line. It’s simply the primary message you want to share through your site. If we apply the theory of ‘high-concept’, it tells us that we need to ensure that we convey that message in a simple and succinct style.

What Happens If You Get It Right?

Why is ‘high-concept’ so important? What are the benefits of presenting a ‘high-concept’ UX Design? One of the mistakes we often fall foul of in UX Design is focusing in on the specifics of user tasks and forgetting about the critical importance of initial opinions. In other words, we focus on how users will interact with a site once they’ve chosen to engage with it and miss the decision-making process that comes before everything. Considering ‘high-concept’ allows us to focus on this initial stage.

The basic premise to consider is that we engage better with things we understand and things we feel comfortable with. Ensuring your site presents its message in a simple ‘high-concept’ way will aid initial user engagement. That initial engagement is the critical precursor to all the good stuff that follows: sales, interaction, and a better conversion rate.

How Much Concept Is Too Much Concept?

The real trick is figuring out how much complexity your users can comfortably handle when it comes to positioning your message. You need to focus initially on presenting only high-level information rather than bombarding users with everything upfront. Give users only the level of understanding they need to engage initially with your site and drive them deeper into the journey disclosing more detail as you go.

Netflix does a great job at this. The initial view new users are presented with on the homepage is upfront with its super high-concept: ‘we do video content’. Once users have engaged with this premise, they are taken further into the proposition, and more information is disclosed: prices, process, and so on.

Netflix: “It lets you watch shows and movies anywhere.” (Large preview)

When To Land Your High-Concept?

As you decide how to lay out the site, another critical factor to consider is when you choose to introduce your initial ‘high-concept’ to your users. It’s key to remember how rare it is that users follow a nice simple linear journey through your site starting at the homepage. The reality is that organic user journeys sometimes start with search results. As a result, the actual interaction with your site begins on the page that’s most relevant to the user’s query. With this in mind, it’s critical to consider how the premise of your site appears to users on key entry pages wherever they appear in the overall hierarchy.

Another key point to consider when introducing the message of your site is that in many scenarios users will be judging whether to engage with you well before they even reach your site. If the first time you present your concept to users is via a Facebook ad or an email campaign, then the implementation is drastically different. However, the theory remains the same, i.e. ensure you present your message to potential users in that single-sentence ‘high-concept’ style.

How To Communicate Your High-Concept

Thus far, we’ve talked about how aiming for ‘high-concept’ messages can increase engagement — but how do we do this? Firstly, let’s focus on the obvious methods such as the wording you use (or don’t use).

Before you even begin designing, sit down and focus in on what you want the premise of your site to be. From there, draw out your straplines or headings to reflect that premise. Make sure you rely on content hierarchy, though: use your headings to land the concept, and don’t bury messages that are critical to understanding deep in your body copy.

Here’s a nice example from Spotify. They achieve a ‘high-concept’ way of positioning their service through a simple, uncluttered combination of imagery and wording:

Spotify: “It lets you listen to loads of music.” (Large preview)

Single Sentence Wording

It’s key to be as succinct as possible: the shorter your message is, the more readable it becomes. The true balancing act comes in deciding where to draw the line between too little to give enough understanding and too much to make it easily readable.

If we take the example of Google Drive — it’s a relatively complex service, but it’s presented in a very basic high-concept way — initially a single sentence that suggests security and simplicity:

Google Drive

Then the next level of the site lands a little more of the concept of the service, while still keeping to a simple single sentence under 25 words (Spielberg would be pleased):

Google Drive: “A place where you can safely store your files online.” (Large preview)

Explainer Videos

It doesn’t just stop with your wording, as there are myriad other elements on the page that you can leverage to land your concept. The explainer video is used to great effect by Amazon to introduce users to the concept of Amazon Go. In reality, it’s a highly complex technical trial of machine learning, computer vision, and AI (artificial intelligence) to reimagine the shopping experience. But because it’s framed simply on the site, it can be explained in a ‘high-concept’ way.

Amazon gives users a single sentence and also, crucially, makes the whole header section a simple explainer video about the service.

Amazon Go: “A real life shop with no checkouts.” (Large preview)

Imagery

The imagery you choose can quickly and simply convey powerful messages about your concept without the need to complicate your UI with other elements. Save the Children use imagery to great effect to quickly show users the critical importance of their work, arguably better than they ever could with words.

Save the children… “They’re a charity that helps children.” (Large preview)

Font And Color

It’s key to consider every element of your site as a potential mechanism for helping you communicate your purpose to your users, through the font or the color choices. For example, rather than having to explicitly tell users that your site is aimed at academics or children you can craft your UI to help show that.

Users have existing mental models that you can appeal to. For example, bright colors and childlike fonts suggest the site is aimed at children, serif fonts and limited color use often suggest a much more serious or academic subject matter. Therefore, when it comes to landing the concept of your site, consider these as important allies to communicate with your users without having to complicate your message.

Legoland: “A big Lego theme park for kids.” (Large preview)

Design Affordance

So far, we’ve focused primarily on using messaging to communicate the concept to users. Still, what if the primary goal of your page is just to get users to interact with a specific element? For example, if you offer some kind of tool? If that’s the case, then showing the interface of this tool itself is often the best way to communicate its purpose to users.

This ties in with the concept of ‘Design Affordance’ — the idea that the form of a design should communicate its purpose. It stands to reason that sometimes the best way to tell users about your simple tool with an easy to use interface — is to show them that interface.


If we look at Airbnb, a large part of the Airbnb concept is the online tool that allows the searching and viewing of results; they use this to great effect on this landing page design by showing the data entry view for that search, showing users how easy it is to search while also presenting them with simple messaging about the Airbnb concept.

Airbnb: “It lets you rent people’s homes for trips.” (Large preview)

How To Test You’ve Landed It

So you’ve designed your site and you’re happy that it pitches its concept almost as well as an 80s blockbuster. But how can you validate that? It would be lovely to check things over with a few rounds of in-depth lab-based user research, but in reality, you’ll seldom have the opportunity, and you’ll find yourself relying on more ‘guerilla’ methods.

One of the simplest and most effective ways to check how ‘high-concept’ your site is is the ‘5 second’ or ‘glance’ test. This simple test involves showing someone the site for 5 seconds and then hiding it from view. Users can then be asked questions about what they recall about the site. The idea is that in 5 seconds they only have the opportunity to view what is immediately obvious.

Here are some examples of questions to ask to get a sense of how well the concept of your site comes across:

  • Can you remember the name of the site you just saw?
  • What do you think is the purpose of the page you just saw?
  • Was it obvious what the site you just saw offers?
  • Do you think you would use the site you just saw?

Using this test with a decent number of people who match your target users should give some really valuable insight into how well your design conveys the purpose of your site and if indeed you’ve managed to achieve ‘high-concept’.

Putting It All Into Practice

Let’s try implementing all this knowledge in the real world. To turn it into a practical approach, I try to follow these simple steps for every project:

  1. Aim For High-Concept
    When you’re establishing the purpose of any new site (or page or ad) try and boil it down to a single, simple, overarching ‘High-Concept.’
  2. Write It Down
    Document what you want that key concept to be in 25 words or less.
  3. Refer Back
    Constantly refer back to that concept throughout the design process. From picking your fonts and colors to crafting your headline content — ensure that it all supports that High-Concept you wrote down.
  4. Test It
    Once complete, use the 5-second test on your design with a number of users and compare their initial thoughts to your initial High-Concept. If they correlate, then great; if not, head back to step 3 and try again.

In this article, we have discussed the simple rule of making blockbuster movies, and we have applied that wisdom to web design. No ‘shock plot twist’ — just some common sense. The first time someone comes into contact with your website, it’s vital to think about what you want the initial message to be. If you want mass market appeal, then craft it into a ‘high-concept’ message that Spielberg himself would be proud of!

Smashing Editorial(ah, ra, yk, il)
07:03
Looking for great WP Plugins for your website? They are in this article

May 14 2018

11:30

A Strategy Guide To CSS Custom Properties

Michael Riethmuller
2018-05-14T13:30:38+02:00 (updated 2018-05-18T15:22:16+00:00)

CSS Custom Properties (sometimes known as ‘CSS variables’) are now supported in all modern browsers, and people are starting to use them in production. This is great, but they’re different from variables in preprocessors, and I’ve already seen many examples of people using them without considering what advantages they offer.

Custom properties have a huge potential to change how we write and structure CSS and, to a lesser extent, how we use JavaScript to interact with UI components. I’m not going to focus on the syntax and how they work (for that I recommend you read “It’s Time To Start Using Custom Properties”). Instead, I want to take a deeper look at strategies for getting the most out of CSS Custom Properties.

How Are They Similar To Variables In Preprocessors?

Custom Properties are a little bit like variables in preprocessors but have some important differences. The first and most obvious difference is the syntax.

With SCSS we use a dollar symbol to denote a variable:

$smashing-red: #d33a2c;

In Less we use an @ symbol:

@smashing-red: #d33a2c;

Custom properties follow a similar convention and use a -- prefix:

:root { --smashing-red: #d33a2c; }
.smashing-text { 
  color: var(--smashing-red);
}

One important difference between custom properties and variables in preprocessors is that custom properties have a different syntax for assigning a value and retrieving that value. When retrieving the value of a custom property we use the var() function.


The next most obvious difference is in the name. They are called ‘custom properties’ because they really are CSS properties. In preprocessors, you can declare and use variables almost anywhere, including outside declaration blocks, in media rules, or even as part of a selector.

$breakpoint: 800px;
$smashing-red: #d33a2c;
$smashing-things: ".smashing-text, .cats";

@media screen and (min-width: $breakpoint) {
  #{$smashing-things} {
    color: $smashing-red;
  }
}

Most of the examples above would be invalid using custom properties.

Custom properties have the same rules about where they can be used as normal CSS properties. It’s far better to think of them as dynamic properties than variables. That means they can only be used inside a declaration block, or in other words, custom properties are tied to a selector. This can be the :root selector, or any other valid selector.

:root { --smashing-red: #d33a2c; }

@media screen and (min-width: 800px) {
  .smashing-text, .cats {
    --margin-left:  1em;
  }
}

You can retrieve the value of a custom property anywhere you would otherwise use a value in a property declaration. This means they can be used as a single value, as part of a shorthand statement or even inside calc() equations.

.smashing-text, .cats {
  color: var(--smashing-red);
  margin: 0 var(--margin-horizontal);
  padding: calc(var(--margin-horizontal) / 2)
}

However, they cannot be used in media queries, or selectors including :nth-child().

There is probably a lot more you want to know about the syntax and how custom properties work, such as how to use fallback values and whether you can assign variables to other variables (yes), but this basic introduction should be enough to understand the rest of the concepts in this article. For more information on the specifics of how custom properties work, you can read “It’s Time To Start Using Custom Properties” written by Serg Hospodarets.
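
As a quick, hedged illustration of those two features (the property names here are invented for this example and are not from the article): the second argument to var() is a fallback, and a custom property can be defined in terms of another custom property.

:root {
  --brand-hue: 10;
  /* A custom property can reference another custom property */
  --brand-color: hsl(var(--brand-hue), 80%, 50%);
}
.alert {
  /* The second argument to var() is a fallback, used if --alert-color is not set */
  color: var(--alert-color, var(--brand-color));
}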

Dynamic vs. Static

Cosmetic differences aside, the most significant difference between variables in preprocessors and custom properties is how they are scoped. We can refer to variables as either statically or dynamically scoped. Variables in preprocessors are static whereas custom properties are dynamic.

Where CSS is concerned, static means that you can update the value of a variable at different points in the compilation process, but this cannot change the value of the code that came before it.

$background: blue;
.blue {
  background: $background;
}
$background: red;
.red {
  background: $background;
}

results in:

.blue {
  background: blue;
}
.red {
  background: red;
}

Once this is rendered to CSS, the variables are gone. This means that we could potentially read an .scss file and determine its output without knowing anything about the HTML, browser or other inputs. This is not the case with custom properties.

Preprocessors do have a kind of “block scope” where variables can be temporarily changed inside a selector, function or mixin. This changes the value of a variable inside the block, but it’s still static. This is tied to the block, not the selector. In the example below, the variable $background is changed inside the .example block. It changes back to the initial value outside the block, even if we use the same selector.

$background: red;
.example {
  $background: blue;
  background: $background;
}

.example {
  background: $background;
}

This will result in:

.example {
  background: blue;
}
.example {
  background: red;
}

Custom properties work differently. Where custom properties are concerned, dynamically scoped means they are subject to inheritance and the cascade. The property is tied to a selector and if the value changes, this affects all matching DOM elements just like any other CSS property.

This is great because you can change the value of a custom property inside a media query, with a pseudo selector such as hover, or even with JavaScript.

a {
  --link-color: black;
}
a:hover,
a:focus {
  --link-color: tomato;
}
@media screen and (min-width: 600px) {
  a {
    --link-color: blue;
  }
}

a {
  color: var(--link-color);
}

We don’t have to change where the custom property is used — we change the value of the custom property with CSS. This means that, using the same custom property, we can have different values in different places or contexts on the same page.

Global vs. Local

In addition to being static or dynamic, variables can also be either global or local. If you write JavaScript, you will be familiar with this. Variables can either be applied to everything inside an application, or their scope can be limited to specific functions or blocks of code.

CSS is similar. We have some things that are applied globally and some things that are more local. Brand colors, vertical spacing, and typography are all examples of things you might want to be applied globally and consistently across your website or application. We also have local things. For example, a button component might have a small and large variant. You wouldn’t want the sizes from these buttons to be applied to all input elements or even every element on the page.

This is something we are familiar with in CSS. We’ve developed design systems, naming conventions and JavaScript libraries, all to help with isolating local components and global design elements. Custom properties provide new options for dealing with this old problem.

CSS Custom Properties are by default locally scoped to the specific selectors we apply them to. So they are kinda like local variables. However, custom properties are also inherited, so in many situations they behave like global variables — especially when applied to the :root selector. This means that we need to be thoughtful about how to use them.
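
To make that distinction concrete, here is a minimal sketch (the selectors and property names are invented for illustration): a property set on a component only resolves for that component and its descendants, while a property set on :root is inherited by everything.

/* Locally scoped: only .card and its descendants can resolve --card-padding */
.card {
  --card-padding: 1rem;
  padding: var(--card-padding);
}

/* Set on :root, --page-background is inherited by every element,
   so it effectively behaves like a global variable */
:root {
  --page-background: #fafafa;
}
body {
  background: var(--page-background);
}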

So many examples show custom properties being applied to the :root element and, although this is fine for a demo, it can result in a messy global scope and unintended issues with inheritance. Luckily, we’ve already learned these lessons.

Global Variables Tend To Be Static

There are a few small exceptions, but generally speaking, most global things in CSS are also static.

Global variables like brand colors, typography and spacing don’t tend to change much from one component to the next. When they do change, this tends to be a global rebranding or some other significant change that rarely happens on a mature product. It still makes sense for these things to be variables: they are used in many places, and variables help with consistency. But it doesn’t make sense for them to be dynamic. The value of these variables does not change in any dynamic way.

For this reason, I strongly recommend using preprocessors for global (static) variables. This not only ensures that they are always static, but it visually denotes them within the code. This can make CSS a whole lot more readable and easier to maintain.
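
In practice, that might look like a small settings file of preprocessor variables (a hypothetical sketch; the names and values are illustrative):

// _settings.scss: global, static design tokens kept in the preprocessor
$brand-red: #d33a2c;
$spacing-unit: 1.5rem;
$font-stack-body: Georgia, serif;
$breakpoint-medium: 800px;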

Local Static Variables Are OK (Sometimes)

You might think, given the strong stance on global variables being static, that by contrast all local variables might need to be dynamic. While it’s true that local variables do tend to be dynamic, this tendency is nowhere near as strong as the tendency for a global variable to be static.

Locally static variables are perfectly OK in many situations. I use preprocessor variables in component files mostly as a developer convenience.

Consider the classic example of a button component with multiple size variations.

Buttons in small, medium and large sizes.

My SCSS might look something like this:

$button-sml: 1em;
$button-med: 1.5em;
$button-lrg: 2em;

.btn {
  // Visual styles
}

.btn-sml {
  font-size: $button-sml;
}

.btn-med {
  font-size: $button-med;
}

.btn-lrg {
  font-size: $button-lrg;
}

Obviously, this example would make more sense if I was using the variables multiple times or deriving margin and padding values from the size variables. However, the ability to quickly prototype different sizes might be a sufficient reason.

Because most static variables are global, I like to differentiate static variables that are used only inside a component. To do this, you can prefix these variables with the component name, or you could use another prefix such as c-variable-name for component or l-variable-name for local. You can use whatever prefix you want, or you can prefix global variables. Whatever you choose, it’s helpful to differentiate them, especially if you’re converting an existing codebase to use custom properties.
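
A sketch of what such a prefixing convention might look like (the names here are made up for illustration):

// Global static variable, used across the whole codebase
$smashing-red: #d33a2c;

// Component-prefixed static variables, used only inside the button component
$btn-radius: 3px;
$btn-border-width: 2px;

// Or a generic prefix: c- for component, l- for local
$c-accordion-speed: 0.3s;
$l-sidebar-width: 16em;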

When To Use Custom Properties

If it is alright to use static variables inside components, when should we use custom properties? Converting existing preprocessor variables to custom properties usually makes little sense. After all, the reason for custom properties is completely different. Custom properties make sense when we have CSS properties that change relative to a condition in the DOM — especially a dynamic condition such as :focus, :hover, media queries or with JavaScript.

I suspect we will always use some form of static variables, although we might need fewer in future, as custom properties offer new ways to organise logic and code. Until then, I think in most situations we are going to be working with a combination of preprocessor variables and custom properties.

It’s helpful to know that we can assign static variables to custom properties. Whether they are global or local, it makes sense in many situations to convert static variables, to locally dynamic custom properties.

Note: Did you know that $var is a valid value for a custom property? Recent versions of Sass recognize this, and therefore we need to interpolate variables assigned to custom properties, like this: #{$var}. This tells Sass you want to output the value of the variable, rather than just $var in the stylesheet. This is only needed for situations like custom properties, where a variable name can also be valid CSS.

If we take the button example above and decide all buttons should use the small variation on mobile devices, regardless of the class applied in the HTML, this is now a more dynamic situation. For this, we should use custom properties.

$button-sml: 1em;
$button-med: 1.5em;
$button-lrg: 2em;

.btn {
  --button-size: #{$button-sml};
}

@media screen and (min-width: 600px) {
  .btn-med {
    --button-size: #{$button-med};
  }
  .btn-lrg {
    --button-size: #{$button-lrg};
  }
}

.btn {
  font-size: var(--button-size);
}

Here I create a single custom property: --button-size. This custom property is initially scoped to all button elements using the btn class. I then change the value of --button-size above 600px for the classes btn-med and btn-lrg. Finally, I apply this custom property to all button elements in one place.

Don’t Be Too Clever

The dynamic nature of custom properties allows us to create some clever and complicated components.

With the introduction of preprocessors, many of us created libraries with clever abstractions using mixins and custom functions. In limited cases, examples like this are still useful today, but for the most part, the longer I work with preprocessors the fewer features I use. Today, I use preprocessors almost exclusively for static variables.

Custom properties will not (and should not) be immune from this type of experimentation, and I look forward to seeing many clever examples. But in the long run, readable and maintainable code will always win over clever abstractions (at least in production).

I read an excellent article on this topic on the Free Code Camp Medium recently. It was written by Bill Sourour and is called “Don’t Do It At Runtime. Do It At Design Time.” Rather than paraphrasing his arguments, I’ll let you read it.

One key difference between preprocessor variables and custom properties is that custom properties work at runtime. This means things that might have been borderline acceptable, in terms of complexity, with preprocessors might not be a good idea with custom properties.

One example that illustrated this for me recently was this:

:root {
  --font-scale: 1.2;
  --font-size-1: calc(var(--font-scale) * var(--font-size-2));
  --font-size-2: calc(var(--font-scale) * var(--font-size-3)); 
  --font-size-3: calc(var(--font-scale) * var(--font-size-4));   
  --font-size-4: 1rem;     
}

This generates a modular scale. A modular scale is a series of numbers that relate to each other using a ratio. They are often used in web design and development to set font-sizes or spacing.

In this example, each custom property is determined using calc(), by taking the value of the previous custom property and multiplying this by the ratio. Doing this, we can get the next number in the scale.

This means the ratios are calculated at run-time and you can change them by updating only the value of the --font-scale property. For example:

@media screen and (min-width: 800px) {
  :root {
    --font-scale: 1.33;
  }
}

This is clever, concise and much quicker than calculating all the values again should you want to change the scale. It’s also something I would not do in production code.

Although the above example is useful for prototyping, in production, I’d much prefer to see something like this:

:root {
  --font-size-1: 1.728rem;
  --font-size-2: 1.44rem;
  --font-size-3: 1.2rem;
  --font-size-4: 1rem;
}

@media screen and (min-width: 800px) {
  :root {
    --font-size-1: 2.369rem; 
    --font-size-2: 1.777rem;     
    --font-size-3: 1.333rem; 
    --font-size-4: 1rem;     
  }
}

Similar to the example in Bill’s article, I find it helpful to see what the actual values are. We read code many more times than we write it and global values such as font scales change infrequently in production.

The above example is still not perfect. It violates the rule from earlier that global values should be static. I’d much prefer to use preprocessor variables and convert them to locally dynamic custom properties using the techniques demonstrated earlier.

It is also important to avoid situations where we go from using one custom property to a different custom property. This can happen when we name properties like this.

Change The Value Not The Variable

“Change the value, not the variable” is one of the most important strategies for using custom properties effectively.

As a general rule, you should never change which custom property is used for any single purpose. It’s easy to do because this is exactly how we do things with preprocessors, but it makes little sense with custom properties.

In this example, we have two custom properties that are used on an example component. I switch from using the value of --font-size-small to --font-size-large depending on the screen size.

:root {
  --font-size-small: 1.2em;
  --font-size-large: 2em;            
}
.example {
  font-size: var(--font-size-small);
}
@media screen and (min-width: 800px) {
  .example {
    font-size: var(--font-size-large);
  }
}

A better way to do this would be to define a single custom property scoped to the component. Then using a media query, or any other selector, change its value.

.example {
  --example-font-size: 1.2em;
}
@media screen and (min-width: 800px) {                             
  .example {
    --example-font-size: 2em;            
  }
}

Finally, in a single place, I use the value of this custom property:

.example {
  font-size: var(--example-font-size);
}

In this example and others before it, media queries have only been used to change the value of custom properties. You might also notice there is only one place where the var() statement is used, and regular CSS properties are updated.

This separation between variable declarations and property declarations is intentional. There are many reasons for this, but the benefits are most obvious when thinking about responsive design.

Responsive Design With Custom Properties

One of the difficulties with responsive design, when it relies heavily on media queries, is that no matter how you organize your CSS, styles relating to a particular component become fragmented across the stylesheet.

It can be very difficult to know which CSS properties are going to change. CSS Custom Properties can help us organize some of the logic related to responsive design and make working with media queries a lot easier.

If It Changes It’s A Variable

Properties that change using media queries are inherently dynamic and custom properties provide the means to express dynamic values in CSS. This means that if you are using a media query to change any CSS property, you should place this value in a custom property.

You can then move this, along with all the media rules, hover states or any dynamic selectors that define how the value changes, to the top of the document.

Separate Logic From Design

When done correctly, separation of logic and design means that media queries are only used to change the value of custom properties. It means all the logic related to responsive design should be at the top of the document, and wherever we see a var() statement in our CSS, we immediately know that this is a property that changes. With traditional methods of writing CSS, there was no way of knowing this at a glance.

Many of us got very good at reading and interpreting CSS at a glance while tracking in our head which properties changed in different situations. I’m tired of this, and I don’t want to do this anymore! Custom properties now provide a link between logic and its implementation, so we don’t need to track this, and that is incredibly useful!

The Logic Fold

The idea of declaring variables at the top of a document or function is not a new idea. It’s something we do in most languages, and it’s now something we can do in CSS as well. Writing CSS in this way creates a clear visual distinction between CSS at the top of the document and below. I need a way to differentiate these sections when I talk about them and the idea of a “logic fold” is a metaphor I’ve started using.

CSS above the fold contains all preprocessor variables and custom properties. This includes all the different values a custom property can have. It should be easy to trace how a custom property changes.

CSS below the fold is straightforward and highly declarative and easy to read. It feels like CSS before media queries and other necessary complexities of modern CSS.

Take a look at a really simple example of a six column flexbox grid system:

.row {
  --row-display: block;
}
@media screen and (min-width: 600px) {
  .row {
    --row-display: flex;
  }
}

The --row-display custom property is initially set to block. Above 600px the display mode is set to flex.

Below the fold might look like this:

.row {
  display: var(--row-display);
  flex-direction: row;
  flex-wrap: nowrap;
}
.col-1, .col-2, .col-3,
.col-4, .col-5, .col-6 {
  flex-grow: 0;
  flex-shrink: 0;
}
.col-1 { flex-basis: 16.66%; }
.col-2 { flex-basis: 33.33%; }
.col-3 { flex-basis: 50%; }
.col-4 { flex-basis: 66.66%; }
.col-5 { flex-basis: 83.33%; }
.col-6 { flex-basis: 100%; }

We immediately know --row-display is a value that changes. Initially, it will be block, so the flex values will be ignored.

This example is fairly simple, but if we expanded it to include a flexible width column that fills the remaining space, it’s likely flex-grow, flex-shrink and flex-basis values would need to be converted to custom properties. You can try this or take a look at a more detailed example here.

Custom Properties For Theming

I’ve mostly argued against using custom properties for global dynamic variables and hopefully implied that attaching custom properties to the :root selector is in many cases considered harmful. But every rule has an exception, and for custom properties, it’s theming.

Limited use of global custom properties can make theming a whole lot easier.

Theming generally refers to letting users customize the UI in some way. This could be something like changing colors on a profile page. Or it might be something more localized. For example, you can choose the color of a note in the Google Keep application.

The Google Keep app.

Theming usually involves compiling a separate stylesheet to override a default value with user preferences, or compiling a different stylesheet for each user. Both of these can be difficult and have an impact on performance.

With custom properties, we don’t need to compile a different stylesheet; we only need to update the value of properties according to the user’s preferences. Since they are inherited values, if we do this on the root element they can be used anywhere in our application.
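
For example, a single theme property set on the root element can be consumed anywhere, and only its value ever needs to change (a minimal sketch; the class names are invented for illustration):

:root {
  --THEME-COLOR: #d33a2c;
}

/* Any component can use the inherited theme value; updating it in one
   place re-themes everything that consumes it */
.btn-primary { background: var(--THEME-COLOR); }
.site-header { border-bottom: 2px solid var(--THEME-COLOR); }
.profile-link { color: var(--THEME-COLOR); }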

Capitalize Global Dynamic Properties

Custom properties are case sensitive and since most custom properties will be local, if you are using global dynamic properties, it can make sense to capitalize them.

:root {
  --THEME-COLOR: var(--user-theme-color, #d33a2c);            
}

Capitalization of variables often signifies global constants. For us, this is going to signify that the property is set elsewhere in the application and that we should probably not change it locally.

Avoid Directly Setting Global Dynamic Properties

Custom properties accept a fallback value. It can be useful to avoid directly overwriting the value of a global custom property and to keep user values separate. We can use the fallback value to do this.

The example above sets the value of --THEME-COLOR to the value of --user-theme-color if it exists. If --user-theme-color is not set, the value of #d33a2c will be used. This way, we don’t need to provide a fallback every time we use --THEME-COLOR.

You might expect in the example below that the background will be set to green. However, the value of --user-theme-color has not been set on the root element, so the value of --THEME-COLOR has not changed.

:root {
  --THEME-COLOR: var(--user-theme-color, #d33a2c);            
}
body {
  --user-theme-color: green;
  background: var(--THEME-COLOR);
}

Indirectly setting global dynamic properties like this protects them from being overwritten locally and ensures user settings are always inherited from the root element. This is a useful convention to safeguard your theme values and avoid unintended inheritance.

If we do want to expose specific properties to inheritance, we can replace the :root selector with a * selector:

* {
  --THEME-COLOR: var(--user-theme-color, #d33a2c);            
}
body {
  --user-theme-color: green;
  background: var(--THEME-COLOR);
}

Now the value of --THEME-COLOR is recalculated for every element and therefore the local value of --user-theme-color can be used. In other words, the background color in this example will be green.

You can see some more detailed examples of this pattern in the section on Manipulating Color With Custom Properties.

Updating Custom Properties With JavaScript

If you want to set custom properties using JavaScript there is a fairly simple API and it looks like this:

const elm = document.documentElement;
elm.style.setProperty('--USER-THEME-COLOR', 'tomato');

Here I’m setting the value of --USER-THEME-COLOR on the document element, or in other words, the :root element where it will be inherited by all elements.

This is not a new API; it’s the same JavaScript method for updating styles on an element. These are inline styles so they will have a higher specificity than regular CSS.

This means it’s easy to apply local customizations:

.note {
  --note-color: #eaeaea;
}
.note {
  background: var(--note-color);
}

Here I set a default value for --note-color and scope this to the .note component. I keep the variable declaration separate from the property declaration, even in this simple example.

const elm = document.querySelector('#note-uid');
elm.style.setProperty('--note-color', 'yellow');

I then target a specific instance of a .note element and change the value of the --note-color custom property for that element only. This will now have higher specificity than the default value.

You can see how this works with this example using React. These user preferences could be saved in local storage or in the case of a larger application perhaps in a database.

Manipulating Color With Custom Properties

In addition to hex values and named colors, CSS has color functions such as rgb() and hsl(). These allow us to specify individual components of a color, such as the hue or lightness. Custom properties can be used in conjunction with color functions.

:root {
  --hue: 25;
}
body {
  background: hsl(var(--hue), 80%, 50%);
}

This is useful, but some of the most widely used features of preprocessors are advanced color functions that allow us to manipulate color using functions like lighten, darken or desaturate:

darken($base-color, 10%);
lighten($base-color, 10%);
desaturate($base-color, 20%);

It would be useful to have some of these features in browsers. They are coming, but until we have native color modification functions in CSS, custom properties could fill some of that gap.

We’ve seen that custom properties can be used inside existing color functions like rgb() and hsl() but they can also be used in calc(). This means that we can convert a real number to a percentage by multiplying it, e.g. calc(50 * 1%) = 50%.

:root {
  --lightness: 50;
}
body {
  background: hsl(25, 80%, calc(var(--lightness) * 1%));
}

The reason we want to store the lightness value as a real number is so that we can manipulate it with calc before converting it to a percentage. For example, if I want to darken a color by 20%, I can multiply its lightness by 0.8. We can make this a little easier to read by separating the lightness calculation into a locally scoped custom property:

:root {
  --lightness: 50;
}
body {
  /* A custom property cannot reference itself, so the adjusted value
     is stored in a separate, locally scoped property */
  --lightness-darker: calc(var(--lightness) * 0.8);
  background: hsl(25, 80%, calc(var(--lightness-darker) * 1%));
}

We could even abstract away more of the calculations and create something like a color modification function in CSS using custom properties. An example like this is likely too complex for most practical cases of theming, but it demonstrates the full power of dynamic custom properties.
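
As a rough sketch of what that could look like (this is my own illustration, not code from the article), the hue, saturation and lightness can be stored as separate custom properties, and a local factor can scale the lightness for a darken-like effect:

:root {
  --base-h: 25;
  --base-s: 80;
  --base-l: 50;
}

.card {
  /* 1 leaves the color unchanged, 0.8 darkens it by roughly 20% */
  --darken-by: 0.8;
  background: hsl(
    var(--base-h),
    calc(var(--base-s) * 1%),
    calc(var(--base-l) * var(--darken-by) * 1%)
  );
}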

Simplify Theming

One of the advantages of using custom properties is the ability to simplify theming. The application doesn’t need to be aware of how custom properties are used. Instead, we use JavaScript or server-side code to set the value of custom properties. How these values are used is determined by the stylesheets.

This means once again that we are able to separate logic from design. If you have a technical design team, authors can update stylesheets and decide how to apply custom properties without changing a single line of JavaScript or backend code.

Custom properties also allow us to move some of the complexity of theming into the CSS. This complexity can have a negative impact on the maintainability of your CSS, so remember to keep it simple wherever possible.


Using Custom Properties Today

Even if you’re supporting IE10 and 11, you can start using custom properties today. Most of the examples in this article have to do with how we write and structure CSS. The benefits are significant in terms of maintainability; however, most of the examples only reduce what could otherwise be done with more complex code.

I use a tool called postcss-css-variables to convert most of the features of custom properties into a static representation of the same code. Other similar tools ignore custom properties inside media queries or complex selectors, treating custom properties much like preprocessor variables.

What these tools cannot do is emulate the runtime features of custom properties. This means no dynamic features like theming or changing properties with JavaScript. This might be OK in many situations. Depending on the situation, UI customization might be considered a progressive enhancement and the default theme could be perfectly acceptable for older browsers.

Loading The Correct Stylesheet

There are many ways you can use postCSS. I use a gulp process to compile separate stylesheets for newer and older browsers. A simplified version of my gulp task looks like this:

import gulp from "gulp";
import sass from "gulp-sass";
import postcss from "gulp-postcss";
import rename from "gulp-rename";
import cssvariables from "postcss-css-variables";
import autoprefixer from "autoprefixer";
import cssnano from "cssnano";

gulp.task("css-no-vars", () =>
  gulp
    .src("./src/css/*.scss")
    .pipe(sass().on("error", sass.logError))
    .pipe(postcss([cssvariables(), cssnano()]))
    .pipe(rename({ extname: ".no-vars.css" }))
    .pipe(gulp.dest("./dist/css"))
);

gulp.task("css", () =>
  gulp
    .src("./src/css/*.scss")
    .pipe(sass().on("error", sass.logError))
    .pipe(postcss([cssnano()]))
    .pipe(rename({ extname: ".css" }))
    .pipe(gulp.dest("./dist/css"))
);

This results in two CSS files: a regular one with custom properties (styles.css) and one for older browsers (styles.no-vars.css). I want IE10 and 11 to be served styles.no-vars.css and other browsers to get the regular CSS file.

Normally, I’d advocate using feature queries, but IE11 doesn’t support them, and we’ve used custom properties so extensively that serving a different stylesheet makes sense in this case.
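
For reference, in browsers that do support feature queries, support for custom properties can be detected in CSS itself; a minimal sketch (not part of my build process) might look like this:

/* Any custom property declaration can act as the test; if it is supported,
   the rules inside the block are applied */
@supports (--css: variables) {
  .example {
    color: var(--example-color, #d33a2c);
  }
}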

Intelligently serving a different stylesheet and avoiding a flash of unstyled content is not a simple task. If you don’t need the dynamic features of custom properties, you could consider serving all browsers styles.no-vars.css and using custom properties simply as a development tool.

If you want to take full advantage of all the dynamic features of custom properties, I suggest using a critical CSS technique. Following these techniques, the main stylesheet is loaded asynchronously while the critical CSS is rendered inline. Your page header might look something like this:

<head>
  <style> /* inlined critical CSS */ </style>
  <script> loadCSS('non-critical.css'); </script>
</head>

We can extend this to load either styles.css or styles.no-vars.css depending on whether the browser supports custom properties. We can detect support like this:

if ( window.CSS && CSS.supports('color', 'var(--test)') ) {
  loadCSS('styles.css');
} else {
  loadCSS('styles.no-vars.css');
}

Conclusion

If you’ve been struggling to organize CSS efficiently, have difficulty with responsive components, want to implement client-side theming, or just want to start off on the right foot with custom properties, this guide should tell you everything you need to know.

It comes down to understanding the difference between dynamic and static variables in CSS as well as a few simple rules:

  1. Separate logic from design;
  2. If a CSS property changes, consider using a custom property;
  3. Change the value of custom properties, not which custom property is used;
  4. Global variables are usually static.

If you follow these conventions, you will find that working with custom properties is a whole lot easier than you think. This might even change how you approach CSS in general.

Smashing Editorial(ra, yk, il)

May 11 2018

13:15

Building Mobile Apps Using React Native And WordPress

Muhammad Muhsin
2018-05-11T15:15:56+02:00 (updated 2018-05-18T15:22:16+00:00)

As web developers, you might have thought that mobile app development calls for a fresh learning curve with another programming language. Perhaps Java and Swift need to be added to your skill set to hit the ground running with both iOS and Android, and that might bog you down.

But this article has a surprise in store for you! We will look at building an e-commerce application for iOS and Android using the WooCommerce platform as our backend. This would be an ideal starting point for anyone willing to get into native cross-platform development.

A Brief History Of Cross-Platform Development

It’s 2011, and we see the beginning of hybrid mobile app development. Frameworks like Apache Cordova, PhoneGap, and Ionic Framework slowly emerge. Everything looks good, and web developers are eagerly coding away mobile apps with their existing knowledge.

However, mobile apps still looked like mobile versions of websites. No native designs like Android’s material design or iOS’s flat look. Navigation worked similarly to the web, and transitions were not buttery smooth. Users were not satisfied with apps built using the hybrid approach and dreamt of the native experience.

Fast forward to March 2015, and React Native appears on the scene. Developers are able to build truly native cross-platform applications using React, a favorite JavaScript library for many developers. They are now easily able to learn a small library on top of what they know with JavaScript. With this knowledge, developers are now targeting the web, iOS and Android.


Furthermore, changes done to the code during development are loaded onto the testing devices almost instantly! This used to take several minutes when we had native development through other approaches. Developers are able to enjoy the instant feedback they used to love with web development.

React developers are more than happy to be able to use existing patterns they have followed into a new platform altogether. In fact, they are targeting two more platforms with what they already know very well.

This is all good for front-end development. But what choices do we have for back-end technology? Do we still have to learn a new language or framework?

The WordPress REST API

In late 2016, WordPress released the much awaited REST API to its core, and opened the doors for solutions with decoupled backends.

So, if you already have a WordPress and WooCommerce website and wish to retain exactly the same offerings and user profiles across your website and native app, this article is for you!

Assumptions Made In This Article

I will walk you through using your WordPress skill to build a mobile app with a WooCommerce store using React Native. The article assumes:

  • You are familiar with the different WordPress APIs, at least at a beginner level.
  • You are familiar with the basics of React.
  • You have a WordPress development server ready. I use Ubuntu with Apache.
  • You have an Android or an iOS device to test with Expo.

What We Will Build In This Tutorial

The project we are going to build through this article is a fashion store app. The app will have the following functionalities:

  • Shop page listing all products,
  • Single product page with details of the selected item,
  • ‘Add to cart’ feature,
  • ‘Show items in cart’ feature,
  • ‘Remove item from cart’ feature.

This article aims to inspire you to use this project as a starting point to build complex mobile apps using React Native.

Note: For the full application, you can visit my project on Github and clone it.

Getting Started With Our Project

We will begin building the app as per the official React Native documentation. Having installed Node on your development environment, open up the command prompt and type in the following command to install the Create React Native App globally.

npm install -g create-react-native-app

Next, we can create our project

create-react-native-app react-native-woocommerce-store

This will create a new React Native project which we can test with Expo.

Next, we will need to install the Expo app on our mobile device which we want to test. It is available for both iOS and Android.

On having installed the Expo app, we can run npm start on our development machine.

cd react-native-woocommerce-store

npm start
Starting a React Native project through the command line via Expo. (Large preview)

After that, you can scan the QR code through the Expo app or enter the given URL in the app’s search bar. This will run the basic ‘Hello World’ app in the mobile. We can now edit App.js to make instant changes to the app running on the phone.

Alternatively, you can run the app on an emulator. But for brevity and accuracy, we will cover running it on an actual device.

Next, let’s install all the required packages for the app using this command:

npm install -s axios react-native-htmlview react-navigation react-redux redux redux-thunk

Setting Up A WordPress Site

Since this article is about creating a React Native app, we will not go into details about creating a WordPress site. Please refer to this article on how to install WordPress on Ubuntu. As WooCommerce REST API requires HTTPS, please make sure it is set up using Let’s Encrypt. Please refer to this article for a how-to guide.

We are not creating a WordPress installation on localhost since we will be running the app on a mobile device, and also since HTTPS is needed.

Once WordPress and HTTPS are successfully set up, we can install the WooCommerce plugin on the site.

Installing the WooCommerce plugin to our WordPress installation. (Large preview)

After installing and activating the plugin, continue with the WooCommerce store setup by following the wizard. After the wizard is complete, click on ‘Return to dashboard.’

You will be greeted by another prompt.

Adding example products to WooCommerce. (Large preview)

Click on ‘Let’s go’ to add example products. This will save us the time of creating our own products to display in the app.

Constants File

To load our store’s products from the WooCommerce REST API, we need the relevant keys in place inside our app. For this purpose, we can have a constants.js file.

First create a folder called ‘src’ and create subfolders inside as follows:

Create the file ‘Constants.js’ within the constants folder. (Large preview)

Now, let’s generate the keys for WooCommerce. In the WordPress dashboard, navigate to WooCommerce → Settings → API → Keys/Apps and click on ‘Add Key.’

Next create a Read Only key with name React Native. Copy over the Consumer Key and Consumer Secret to the constants.js file as follows:

const Constants = {
  URL: {
    wc: 'https://woocommerce-store.on-its-way.com/wp-json/wc/v2/'
  },
  Keys: {
    ConsumerKey: 'CONSUMER_KEY_HERE',
    ConsumerSecret: 'CONSUMER_SECRET_HERE'
  }
}
export default Constants;

Starting With React Navigation

React Navigation is a community solution to navigating between the different screens and is a standalone library. It allows developers to set up the screens of the React Native app with just a few lines of code.

There are different navigation methods within React Navigation:

  • Stack,
  • Switch,
  • Tabs,
  • Drawer,
  • and more.

For our application, we will use a combination of StackNavigation and DrawerNavigation to navigate between the different screens. StackNavigation is similar to how browser history works on the web. We are using it since it provides an interface for the header and the header navigation icons. It has push and pop similar to stacks in data structures. Push means we add a new screen to the top of the navigation stack. Pop removes a screen from the stack.

The code shows that the StackNavigation, in fact, houses the DrawerNavigation within itself. It also takes properties for the header style and header buttons. We are placing the navigation drawer button to the left and the shopping cart button to the right. The drawer button switches the drawer on and off whereas the cart button takes the user to the shopping cart screen.

const StackNavigation = StackNavigator({
 DrawerNavigation: { screen: DrawerNavigation }
}, {
   headerMode: 'float',
   navigationOptions: ({ navigation, screenProps }) => ({
     headerStyle: { backgroundColor: '#4C3E54' },
     headerTintColor: 'white',
     headerLeft: drawerButton(navigation),
     headerRight: cartButton(navigation, screenProps)
   })
 });

const drawerButton = (navigation) => (
  <Text
    style={{ padding: 15, color: 'white' }}
    onPress={() => {
      if (navigation.state.index === 0) {
        navigation.navigate('DrawerOpen')
      } else {
        navigation.navigate('DrawerClose')
      }
    }}
  >
    {/* Hamburger icon for the drawer; the exact icon component is assumed */}
    <EvilIcons name="navicon" size={30} />
  </Text>
);

const cartButton = (navigation, screenProps) => (
  <Text
    style={{ padding: 15, color: 'white' }}
    onPress={() => { navigation.navigate('CartPage') }}
  >
    <EvilIcons name="cart" size={30} />
    {screenProps.cartCount}
  </Text>
);

DrawerNavigation, on the other hand, provides the side drawer which will allow us to navigate between Home, Shop, and Cart. The DrawerNavigator lists the different screens that the user can visit, namely Home page, Products page, Product page, and Cart page. It also has a property which will take the Drawer container: the sliding menu which opens up when clicking the hamburger menu.

const DrawerNavigation = DrawerNavigator({
 Home: {
   screen: HomePage,
   navigationOptions: {
     title: "RN WC Store"
   }
 },
 Products: {
   screen: Products,
   navigationOptions: {
     title: "Shop"
   }
 },
 Product: {
   screen: Product,
   navigationOptions: ({ navigation }) => ({
     title: navigation.state.params.product.name
   }),
 },
 CartPage: {
   screen: CartPage,
   navigationOptions: {
     title: "Cart"
   }
 }
}, {
   contentComponent: DrawerContainer
 });

Left: The Home page (homepage.js). Right: The open drawer (DrawerContainer.js).

Injecting The Redux Store To App.js

Since we are using Redux in this app, we have to inject the store into our app. We do this with the help of the Provider component.

const store = configureStore();

class App extends React.Component {
 render() {
   return (
     <Provider store={store}>    
       <ConnectedApp />    
     </Provider>    
   )
 }
}

We will then have a ConnectedApp component so that we can have the cart count in the header.

class CA extends React.Component {
 render() {
   const cart = {
     cartCount: this.props.cart.length
   }
   return (
     <StackNavigation screenProps={cart} />
   );
 }
}

function mapStateToProps(state) {
 return {
   cart: state.cart
 };
}

const ConnectedApp = connect(mapStateToProps, null)(CA);

Redux Store, Actions, And Reducers

In Redux, we have three different parts:

  1. Store
    Holds the whole state of your entire application. The only way to change state is to dispatch an action to it.
  2. Actions
    A plain object that represents an intention to change the state.
  3. Reducers
    A function that accepts a state and an action type and returns a new state.

These three components of Redux help us achieve a predictable state for the entire app. For simplicity, we will look at how the products are fetched and saved in the Redux store.


First of all, let’s look at the code for creating the store:

let middleware = [thunk];

export default function configureStore() {
    return createStore(
        RootReducer,
        applyMiddleware(...middleware)
    );
}

Next, the products action is responsible for fetching the products from the remote website.

export function getProducts() {
   return (dispatch) => {
       const url = `${Constants.URL.wc}products?per_page=100&consumer_key=${Constants.Keys.ConsumerKey}&consumer_secret=${Constants.Keys.ConsumerSecret}`
      
       return axios.get(url).then(response => {
           dispatch({
               type: types.GET_PRODUCTS_SUCCESS,
               products: response.data
           }
       )}).catch(err => {
           console.log(err.error);
       })
   };
}

The products reducer checks the action type and decides whether the state needs to be modified, returning either the fetched products or the existing state.

export default function (state = InitialState.products, action) {
    switch (action.type) {
        case types.GET_PRODUCTS_SUCCESS:
            return action.products;
        default:
            return state;
    }
}

Displaying The WooCommerce Shop

The products.js file is our Shop page. It basically displays the list of products from WooCommerce.

class ProductsList extends Component {

 componentDidMount() {
   this.props.ProductAction.getProducts(); 
 }

 _keyExtractor = (item, index) => item.id;

 render() {
   const { navigate } = this.props.navigation;
   const Items = (
     <FlatList contentContainerStyle={styles.list} numColumns={2}
       data={this.props.products || []} 
       keyExtractor={this._keyExtractor}
       renderItem={
         ({ item }) => (
           <TouchableHighlight style={{ width: '50%' }} onPress={() => navigate("Product", { product: item })} underlayColor="white">
             <View style={styles.view} >
               <Image style={styles.image} source={{ uri: item.images[0].src }} />
               <Text style={styles.text}>{item.name}</Text>
             </View>
           </TouchableHighlight>
         )
       }
     />
   );
   return (
     <ScrollView>
       {this.props.products.length ? Items :
         <View style={{ alignItems: 'center', justifyContent: 'center' }}>
           <Image style={styles.loader} source={LoadingAnimation} />
         </View>
       }
     </ScrollView>
   );
 }
}

this.props.ProductAction.getProducts() and this.props.products are possible because of mapStateToProps and mapDispatchToProps.

Products listing screen. (Large preview)

mapStateToProps and mapDispatchToProps

State is the Redux store, and dispatch is how we fire the actions. Both of these will be exposed as props in the component.

function mapStateToProps(state) {
 return {
   products: state.products
 };
}
function mapDispatchToProps(dispatch) {
 return {
   ProductAction: bindActionCreators(ProductAction, dispatch)
 };
}
export default connect(mapStateToProps, mapDispatchToProps)(ProductsList);

Styles

In React Native, styles are generally defined on the same page. It’s similar to CSS, but we use camelCase properties instead of hyphenated properties.

const styles = StyleSheet.create({
 list: {
   flexDirection: 'column'
 },
 view: {
   padding: 10
 },
 loader: {
   width: 200,
   height: 200,
   alignItems: 'center',
   justifyContent: 'center',
 },
 image: {
   width: 150,
   height: 150
 },
 text: {
   textAlign: 'center',
   fontSize: 20,
   padding: 5
 }
});

Single Product Page

This page contains details of a selected product. It shows the user the name, price, and description of the product. It also has the ‘Add to cart’ function.

Single product page. (Large preview)

Cart Page

This screen shows the list of items in the cart. The action has the functions getCart, addToCart, and removeFromCart. The reducer handles these actions likewise. Identification of actions is done through actionTypes — constants which describe the actions and are stored in a separate file.

export const GET_PRODUCTS_SUCCESS = 'GET_PRODUCTS_SUCCESS'
export const GET_PRODUCTS_FAILED = 'GET_PRODUCTS_FAILED';

export const GET_CART_SUCCESS = 'GET_CART_SUCCESS';
export const ADD_TO_CART_SUCCESS = 'ADD_TO_CART_SUCCESS';
export const REMOVE_FROM_CART_SUCCESS = 'REMOVE_FROM_CART_SUCCESS';

This is the code for the CartPage component:

class CartPage extends React.Component {

 componentDidMount() {
   this.props.CartAction.getCart();
 }

 _keyExtractor = (item, index) => item.id;

 removeItem(item) {
   this.props.CartAction.removeFromCart(item);
 }

 render() {
   const { cart } = this.props;
   console.log('render cart', cart)

   if (cart && cart.length > 0) {
     const Items = <FlatList contentContainerStyle={styles.list}
       data={cart}
       keyExtractor={this._keyExtractor}
       renderItem={({ item }) =>
         <View style={styles.lineItem} >
           <Image style={styles.image} source={{ uri: item.image }} />
           <Text style={styles.text}>{item.name}</Text>
           <Text style={styles.text}>{item.quantity}</Text>
           <TouchableOpacity style={{ marginLeft: 'auto' }} onPress={() => this.removeItem(item)}><Entypo name="cross" size={30} /></TouchableOpacity>
         </View>
       }
     />;
     return (
       <View style={styles.container}>
         {Items}
       </View>
     )
   } else {
     return (
       <View style={styles.container}>
         <Text>Cart is empty!</Text>
       </View>
     )
   }
 }
}

As you can see, we are using a FlatList to iterate through the cart items. It takes in an array and creates a list of items to be displayed on the screen.

Left: The cart page when it has items in it. Right: The cart page when it is empty.

Conclusion

You can configure information about the app, such as its name and icon, in the app.json file. The app can be published after installing the exp CLI via npm.

To sum up:

  • We now have a decent e-commerce application with React Native;
  • Expo can be used to run the project on a smartphone;
  • Existing backend technologies such as WordPress can be used;
  • Redux can be used for managing the state of the entire app;
  • Web developers, especially React developers, can leverage this knowledge to build bigger apps.

For the full application, you can visit my project on Github and clone it. Feel free to fork it and improve it further. As an exercise, you can continue building more features into the project such as:

  • Checkout page,
  • Authentication,
  • Storing the cart data in AsyncStorage so that closing the app does not clear the cart.
Smashing Editorial(da, lf, ra, yk, il)
10:30

Google I/O Developer Roundup: What’s New?

Rachel Andrew
2018-05-11T12:30:47+02:00 (updated 2018-05-18T15:22:16+00:00)

The Google I/O keynote opened with an animation asking us to “Make Good Things Together,” and in this article, I’m going to round up some of the things announced in the Keynote and Developer Keynote, that are of interest to Smashing readers. The announcements in the keynote were backed up by sessions during the event, which were recorded. To help you use the things announced, I’ll be linking to the videos of those sessions plus any supporting material I’ve been able to find.

I would love to know which of these announcements you would like to find out more about — please do leave a comment below. Also, if you are an author with experience to share on any of these then why not drop us a line with an outline?

The Keynotes

The main announcements were all covered in the keynote presentations. If you want to watch all of the keynotes, you can find them on YouTube along with some condensed versions:


Google I/O And The Web

I was attending Google I/O as a Web GDE (Google Developer Expert), and I/O typically has a lot of content which is more of interest to Android Developers. That said, there were plenty of announcements and useful sessions for me.

A slide saying “Make the platform more powerful, make web development easier.”

The Web State of the Union session covered announcements and information regarding Lighthouse, PWAs, Polymer 3.0, Web Assembly and AMP. In addition to the video, you can find a write-up of this session on the Chromium Blog.

What’s New in Chrome DevTools covered all of the new features that are available or coming soon to DevTools.

Progressive Web Apps were a big story through the event, and if you have yet to build your first PWA, the PWA Starter Kit presentation can help you get started using Polymer. To look more deeply into Polymer, you could continue with Web Components and the Polymer Project: Polymer 3.0 and beyond. The Polymer site is now updated with the documentation for Polymer 3.0.

Angular wasn’t left out, watch the What’s New in Angular session for all the details.

Headless Chrome is a subject that has interested me lately, as I’m always looking for interesting ways to automate tasks. In the session The Power of Headless Chrome and Browser Automation, you can find out about using Headless Chrome and Puppeteer. If you are wondering what sort of things you could achieve, there are some examples of things you might like to do on GitHub.

Also, take a look at:

Android Developer News

I’m not an Android developer, but I was surrounded by people who are. I’ve tried to pick out some of the things that seemed most exciting to the crowd. The session, “What’s New In Android,” is a great place to go to find out all of the key announcements. The first of which is the fact that Android P Beta is now available, and many of the features announced will be available as part of that beta. You can check to see if your device is supported by the Beta here.

Android Jetpack is a set of libraries, tools, and architectural guidance to help make it quick and easy to build great Android apps. The IDEs are integrated with Android Studio, and this seems to be an attempt to streamline the developer experience of common tasks. You can find out more information about Android Jetpack in the session video on What’s New In Android Support Library.

The ability to create Actions in Apps is something that is now in Beta and enables developers to create interactions that cross from Voice to displays — be that your watch, phone or the new Smart Screens that will be introduced later this year.

Slices are an interactive snippet of an App UI, introduced in Android P. To find out more, take a look at this I/O Session from which you can learn how to build a slice and have it appear as suggestions in search results.

Having looked at a few specific announcements for the Web and Android, I’ll now take a look at some of the bigger themes covered at the event and how these might play out for developers.

7,000 people attended Google I/O.

Artificial Intelligence, Augmented Reality, And Machine Learning

As expected, the main keynote as well as the Developer keynote both had a strong AI, AR, and ML theme. This theme is part of many Google products and announcements. Google is leveraging the huge amount of data that they have collected in order to create some incredible products and services, many of which bring with them new concerns on privacy and consent as the digital and real world merge more closely.

Google Photos is getting new AI features which will help you improve your photographs by offering suggestions, such as brightness fixes or rotations.

A new version of Google News will use AI to present to users a range of coverage on stories they are interested in.

One of the demos that achieved a huge round of applause was when Google Lens was demonstrated being pointed at a section of text in a book, and that text was then able to be copied and pasted into the phone.

If you are interested in using AI, then you might like to watch the session AIY: Do It Yourself Artificial Intelligence.

Maps

When traveling, I know the all-too-common scenario of coming out of a train station with maps open and having no idea which direction I am facing and which street is which. Google is hoping to solve this issue with augmented reality, bringing street view photographs and directions to the screen to help you know which direction to start walking in.

Google Maps are also taking more of a slice of the area we might already use FourSquare or Yelp for, bringing more recommendations based on places we have already visited or reviewed. In addition, a feature I can see myself using when trying to plan post-conference dinners is the ability to create a shortlist of places and share it with a group in order to select where to go. Android Central have an excellent post on all of the new maps features if you want to know more. These features will be available on the Android and iOS versions of the Google Maps app.

For developers, a roundup of the changes to the Maps API can be found in the session Google Maps Platform: Ready For Scale.

Introducing ML Kit

While many of us will find the features powered by Machine Learning useful as consumers of the apps that use them, if you are keen to use machine learning in your apps, then Google is trying to make that easier for you with ML Kit. ML Kit helps you to bring the power of machine learning to your apps with Google APIs. The five ready-to-go APIs are:

  • Text Recognition
  • Face Detection
  • Barcode Scanning
  • Image Labeling
  • Landmark Recognition

Two more APIs will be ready in the coming months: A smart reply API allowing you to support contextual messaging replies in your app, and a high-density face contour addition to the face detection API.

You can read more about ML Kit in this Google Developers post Introducing ML Kit and in the session video ML Kit: Machine Learning SDK For Mobile Developers.

Google Duplex

The most talked about demo of the keynote was Google Duplex, with a demo of Google Assistant having a conversation with a restaurant and hairdresser in order to make a reservation and book an appointment. The demo drew gasps from the crowd as the conversation was so natural, the person on the other end of the phone did not recognize they were not talking to a person.

It didn’t take long for people to move from “*That’s cool!*” to “*That’s scary!*” and there are obvious concerns about the ethics of a robot not declaring that it is not a real person when engaging with someone on the phone.

The recordings that were played during the keynote can be found in Ethan Marcotte’s post about the feature, in which he notes that “Duplex was elegantly, intentionally designed to deceive.” Jeremy Keith wisely points out that the people excited to try this technology are not imagining themselves as the person at the end of the phone.

In addition to Duplex, there were a number of announcements around Google Assistant including the ability to have continued conversation, a back-and-forth conversation that doesn’t require saying “Hey, Google” at the beginning of each phrase.

Accessibility

As a layperson, I can’t help but think that many of the things Google is working on could have hugely positive implications in terms of accessibility. Even the controversial Duplex could enable someone who can’t have a voice call to more easily deal with businesses only contactable by phone. One area where Google technology will soon have an impact is with the Android App Google Lookout which will help visually impaired users understand what is around them, by using the phone camera and giving spoken notifications to the user.

There were several sessions bringing a real focus on accessibility at I/O, including the chance for developers to have an accessibility review of their application. For web developers, Rob Dodson’s talk What’s New In Accessibility covers new features of DevTools to help us build more accessible sites, plus the Accessibility Object Model which gives more control over the accessibility of sites. For Android Developers What’s New In Android Accessibility details the features that will be part of Android P. With the focus on AR and VR, there was also a session on what we need to think about in this emerging area of technology: Accessibility For AR And VR.

Linux Apps Are Coming To Chrome OS

An interesting announcement was the fact that Linux Apps will be installable on Chrome OS, making a ChromeBook a far more interesting choice as a developer. According to VentureBeat, Google is using Debian Stretch, so you’ll be able to run apt and install any software there is a Debian package for. This would include things like Git, VS Code, and Android Studio.

Material Design

The material.io website has been updated for the new version of Material Design; the big announcement for that being Theming, which will allow developers using Material to create their own themes making their apps look a little less like a Google property. Gallery will then allow teams to share and collaborate on their designs.

Also announced was the Material Theme Editor which is a plugin for Sketch, making it Mac only. The website does say that it is “currently available for Sketch” so perhaps other versions will appear in due course.

You can find a write-up of how to create a Material theme on the material.io website. The design.google site is also a useful destination for Material and other Google design themes. From the sessions, you can watch:


Digital Wellbeing

Announced at the keynote was the new Google Digital Wellbeing site, along with a suite of features in Android P, and also on YouTube, aimed at helping people to disconnect from their devices and reduce stress caused by things such as alerts and notifications. You can explore all of the features at wellbeing.google/. Most of these will require Android P, currently in Beta; however, the YouTube features will be part of the YouTube app and therefore available to everyone.

As a developer, it is interesting to think about how we can implement similar features in our own applications, whether for web or mobile. Things like combining notifications into one daily alert, as will be enabled on YouTube, could help to prevent users from being overloaded by alerts and let them properly engage at a scheduled time. It has become easier and easier to constantly ask our users to look at us; perhaps we should instead work with our users to be available when they need us, and quietly hide away when they are doing something else.

For more information on building a more humane technology ecosystem, explore the Center For Humane Technology website.

News Roundup

Every news site has been posting their own reviews of I/O, so I’ll wrap up with some of the best coverage I’ve seen. As an attendee of the event, I felt it was slickly managed, good fun, yet it was very clear that Google has well-rehearsed and clear messages they want to send to the developer communities who create apps and content. Every key announcement in the main keynotes was followed up by sessions diving into the practical details of how to use that technology in development. There was so much being announced and demonstrated that it is impossible to cover everything in this post — or even to have experienced it all at the event. I know that there are several videos on the I/O playlist that I’ll be watching after returning home.

  • TechCrunch has an excellent roundup, with individual articles on many of the big announcements,
  • There’s also coverage of the event from CNET,
  • The Verge has a story stream of their content reporting on the announcements.

If you were at I/O or following along with the live stream, what announcements were most interesting to you? You can use the comments to share the things I didn’t cover that would be your highlights of the three days.

Smashing Editorial(il)

May 10 2018

12:25

Things Designers Should Know About SEO In 2018

Myriam Jessier
2018-05-10T14:25:45+02:00 (updated 2018-05-18T15:22:16+00:00)

Design has a large impact on content visibility — so does SEO. However, there are some key SEO concepts that experts in the field struggle to communicate clearly to designers. This can create friction and the impression that most well-designed websites are very poorly optimized for SEO.

Here is an overview of what we will be covering in this article:

  • Design mobile first for Google,
  • Structure content for organic visibility,
  • Focus on user intent (not keywords),
  • Send the right signals with internal linking,
  • A crash course on image SEO,
  • Penalties for pop-ups,
  • Say it like you mean it: voice search and assistants.

Design Mobile First For Google

This year, Google plans on indexing websites mobile first:

Our algorithms will eventually primarily use the mobile version of a site’s content to rank pages from that site, to understand structured data, and to show snippets from those pages in our results.

So, How Does This Affect Websites In Terms Of Design?

Well, it means that your website should be responsive. Responsive design isn’t about making elements fit on various screens. It is about usability. This requires shifting your thinking towards designing a consistent, high-quality experience across multiple devices.


Here are a few things that users care about when it comes to a website:

  • Flexible texts and images.
    People should be able to view images and read texts. No one likes looking at pixels hoping they morph into something readable or into an image.
  • Defined breakpoints for design changes (you can do that via CSS media queries).
  • Being able to use your website on all devices.
    This can mean being able to use your website in portrait or landscape mode without losing half of the features or having buttons that do not work.
  • A fluid site grid that aims to maintain proportions (a minimal sketch of such a grid with a breakpoint follows this list).
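To make the list above concrete, here is a minimal sketch of flexible images, a fluid grid, and a breakpoint defined with a CSS media query. The class names and the 600px breakpoint are illustrative assumptions, not values taken from any real site:

<style>
  /* One fluid column on small screens; proportions are kept with fr units */
  .cards {
    display: grid;
    grid-template-columns: 1fr;
    grid-gap: 1em;
  }

  /* Flexible images that scale down instead of overflowing */
  .cards img {
    max-width: 100%;
    height: auto;
  }

  /* A defined breakpoint where the layout changes to three fluid columns */
  @media (min-width: 600px) {
    .cards {
      grid-template-columns: repeat(3, 1fr);
    }
  }
</style>

<div class="cards">
  <article><img src="beach.jpg" alt="A palm tree on a beach"><p>Readable text at any size.</p></article>
  <article><img src="city.jpg" alt="A city skyline at dusk"><p>Buttons and links stay usable.</p></article>
  <article><img src="forest.jpg" alt="A path through a forest"><p>The layout adapts, rather than just shrinking.</p></article>
</div>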

We won’t go into details about how to create a remarkable responsive website as this is not the main topic. However, if you want to take a deep dive into this fascinating subject, may I recommend Smashing Book 5?

Do you need a concrete visual to help you understand why you must think about the mobile side of things from the get-go? Stéphanie Walter provided a great visual to get the point across:


Crafting Content For Smaller Screens

Your content should be as responsive as your design. The first step to making content responsive for your users is to understand user behavior and preferences.

  • Content should be so riveting that users scroll to read more of it;
  • Stop thinking in terms of text. Animated gifs, videos, infographics are all very useful types of content that are very mobile-friendly;
  • Keep your headlines short and enticing. You need to convince visitors to click on an article, and a wall of text won’t achieve that;
  • Different devices can sometimes mean different expectations or different user needs. Your content should reflect that.
SEO tips regarding responsive design:
  • Google offers a mobile-friendly testing tool. Careful though: This tool helps you meet Google’s design standards, but it doesn’t mean that your website is perfectly optimized for a mobile experience.
  • Test how the Google bot sees your website with the “Fetch and render” feature in Google Search Console. You can test desktop and mobile formats to see how a human user and Google bot will see your site.
In the left-hand navigation click on “crawl” and then “fetch as Google”. You can compare the rendered images to detect issues between user and bot displays. (Large preview)

Resources:

Google Crawling Scheme: Making The Bot Smarter

Search engines go about crawling a website in a certain way. We call that a ‘crawling scheme.’ Google has announced that it is retiring its old AJAX crawling scheme in Q2 of 2018. The new crawling scheme has evolved quite a lot: It can handle AJAX and JavaScript natively. This means that the bot can “see” more of your content that may have been hidden behind some code prior to the new crawling scheme.

For example, Google’s new mobile indexing will adjust the impact of content hidden in tabs (with JavaScript). Before this change, the best practice was to avoid hidden content at all costs, as it wasn’t as effective for SEO (it was either too hard for the bot to crawl in some cases or given less importance by Google in others).

Content Structure For Organic Visibility

SEO experts think of page organization in terms that are accessible for a search engine bot. This means that we look at a page design to quickly establish what is an H1, H2, and an H3 tag. Content organization should be meaningful. This means that it should act as a path that the bot can follow. If all of this sounds familiar to you, it may be due to the fact that content hierarchy is also used to improve accessibility. There are some slight differences between how SEO and accessibility use H tags:

  • SEO focuses on H1 through H3 tags whereas accessibility makes use of all H tags (H1 through H6).
  • SEO experts recommend using a single H1 tag per page, whereas accessibility allows for multiple H1 tags on a page. Although Google has said in the past that it accepts multiple H1 tags on a page, years of experience have shown that a single H1 tag is better to help you rank (a heading-outline sketch follows this list).
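As a rough illustration of that path through the content, here is a hypothetical heading outline for a chocolate cake recipe: one H1 describing the page, with H2 and H3 tags nested logically beneath it.

<main>
  <!-- A single H1 describes the page as a whole -->
  <h1>Chocolate Cake Recipe</h1>

  <h2>Ingredients</h2>
  <h3>For the cake</h3>
  <h3>For the frosting</h3>

  <h2>Instructions</h2>
  <h3>Baking the layers</h3>
  <h3>Assembling and decorating</h3>
</main>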

SEO experts investigate content structure by displaying the headings on a page. You can do the same type of check quickly by using the Web Developer Toolbar extension (available on Chrome and Firefox) by Chris Pederick. If you go into the information section and click on “View Document Outline,” a tab with the content hierarchy will open in your browser.


So, if you head on over to The Design School Guide To Visual Hierarchy, you will see a page, and if you open the document hierarchy tab, you will see something quite different.


Bonus: If the content structure of your pages is easy to understand and geared towards common user queries, then Google may show it in “position zero” (a result that shows a content snippet above the first results).

You can see how this can help you increase your overall visibility in search engine result pages below:

Position zero example courtesy of Google.com. (Large preview)

SEO Tip To Get Content Hierarchy Right

Content hierarchy should not include sidebars, headers, or footers. Why? Because if we are talking about a chocolate recipe and the first thing you present to the robot is content from your sidebar touting a signup form for your newsletter, it’s falling short of user expectations (hint: unless a newsletter signup promises a slice of chocolate cake for dinner, you are about to have very disappointed users).

If we go back to the Canva page, you can see that “related articles” and other H tags should not be part of the content hierarchy of this page as they do not reflect the content of this specific page. Although HTML5 standards recommend using H tags for sidebars, headers, and footers, it’s not very compatible with SEO.

Content Quantity Shifts: Long Form Content Is On The Rise

Creating flagship content is important to rank in Google. In copywriting terms, this type of content is often part of a cornerstone page. It can take the shape of a tutorial or an FAQ page, and cornerstone content is the foundation of a well-ranked website. As such, it is a prized asset for inbound marketing to attract visits and backlinks, and to position a brand in a niche.

In the olden days, 400-word pages were considered to be “long form” content to rank in Google. Today, long-form content that is 1000, 2000 or even 3000 words long outranks short form content very often. This means that you need to start planning and designing to make long-form content engaging and scrollable. Design interactions should be aesthetically pleasing and create a consistent experience even for mammoth content like cornerstone pages. Long form content is a great way to create an immersive and engaging experience.

A great example of the power of long-form content tied in with user search intent is the article about intrusive interstitials on Smashing. Most users will call interstitials “pop-ups” because that is how many of us think of these things. In this case, on Google.com, the article ranks right after the official Google guidelines (and it makes sense that Google should be number 1 on their own branded query), but Smashing Magazine is shown as a “position 0” snippet of text on the query “Google pop up guidelines.” Search Engine Land, a high-quality SEO blog that is a pillar of the community, ranks after Smashing (which happens to be more of a design blog than an SEO one).

Of course, these results are ever-changing thanks to machine learning, location data, language and a slew of other ranking factors. However, it is a nice indicator that user intent and long-form content are a great way to get accrued visibility from your target audience.


If you wish to know more, you can consult a data-driven article by Neil Patel on the subject “Why 3000+ Word Blog Posts Get More Traffic (A Data-Driven Answer).”

Resources:

Tips To Design For Long Form Content

Here are a few tips to help you design for long-form content:

  • Spacing is crucial.
    White space helps make content more scannable for the human eye.
  • Visual clues to help navigation.
    Encourage user action without taking away from the story being told.
  • Enhance content with illustrations or video animation to maintain user engagement.
  • Typography is a great way to break up text monotony and maintain the visual flow of a page.
  • Intuitive Scrolling helps make the scrolling process feel seamless. Always provide a clear navigation path through the information.
  • Provide milestones.
    Time indicators are great for giving readers a sense of accomplishment as they read the content.

Resources:

User Intent Is Crucial

Search engines have evolved in leaps and bounds these past few years. Google’s aim has always been to have their bot mimic human behavior to help evaluate websites. This means that search engine optimization has moved beyond “keywords” and now seeks to understand the intent behind the search query a user types into Google.

For example, if you work to optimize content for an Android banking application and do keyword research, you will see that oftentimes the words “free iPad” come up in North America. This doesn’t make sense until you realize that most banks used to run promotions that would offer free iPads for every new account opened. In light of this, we know that using “free iPad” as a keyword for an Android application used by a bank that is not running this type of promotion is not a good idea.

User intent matters unless you want to rank on terms that will bring you unqualified traffic. Does this mean that keyword research is now useless? Of course not! It just means that the way we approach keyword research is now infused with a UX-friendly approach.

Researching User Intent

User experience is critical for SEO. We also focus on user intent. The search queries a user makes give us valuable insights as to how people think about content, products, and services. Researching user intent can help uncover the hopes, problems, and desires of your users. Google approaches user intent by focusing on micro-moments. Micro-moments can be defined as intent profiles that seek information through search results. Here are the four big micro-moments:

  1. I want to know.
    Users want information or inspiration at this stage. The queries are quite often conversational — it starts with a problem. Since users don’t know the solution or sometimes the words to describe their interest, queries will always be a bit vaguer.
  2. I want to go.
    Location, location, location! Queries that signal a local intent are gaining ground. We don’t want any type of restaurant; the one that matters is the one that’s closest to us/the best in our area. Well, this can be seen in queries that include “near me” or a specific city or neighborhood. Localization is important to humans.
  3. I want to do.
    People also search for things that they want to do. This is where tutorials are key. Advertising promises fast weight loss, but a savvy entrepreneur should tell you HOW you can lose weight in detail.
  4. I want to buy.
    Customers showcase intent to buy quite clearly online. They want “deals” or “reviews” to make their decision.

Uncovering User Intent

Your UX or design strategy should reflect these various stages of user intent. Little tweaks in the words you use can make a big difference. So how does one go about uncovering user intent? We recommend you install Google Search Console to gain insights as to how users find you. This free tool helps you discover some of the keywords users search for to find your content. Let’s look at two tools that can help you uncover or validate user intent. Best of all, they are free!

Google Trends

Google Trends is a great way to validate if something’s popularity is on the rise, waning or steady. It provides data locally and allows you to compare two queries to see which one is more popular. This tool is free and easily accessible (compared to the Keyword Planner tool in AdWords that requires an account and more hassle).

Answer The Public

Answer The Public is a great way to quickly see what people are looking for on Google. Better yet, you can do so by language and get a wonderful sunburst visual for your efforts! It’s not as precise as some of the tools SEO experts use, but keep in mind that we’re not asking designers and UX experts to become search engine optimization gurus! Note: this tool won’t provide you with stats or local data (it won’t give you data just for England, for example). No need for a tutorial here, just head on over and try it out!

Bonus Tool: Serpstat “Search Questions”

Full disclosure, I use other premium tools as part of my own SEO toolkit. Serpstat is a premium content marketing toolkit, but it’s actually affordable and allows you to dig much deeper into user intent. It helps provide me with information I never expected to find. Case in point, a few months ago, I got to learn that quite a few people in North America were confused about why bathtubs would let light shine through. The answer was easy to me; most bathtubs are made of fiberglass (not metal like in the olden days). It turns out, not everyone is clear on that and some customers needed to be reassured on this point.

If you head on to the “content marketing” section, you can access “Questions.” You can input a keyword and see how it is used in various queries. You can export the results.

This tool will also help you spy on the competition’s content marketing efforts, determine what queries your website ranks on in various countries and what your top SEO pages are.


Resources:

Internal Linking: Because We All Have Our Favorite Pages

The links you have on your website signal to search engine bots which pages you find more valuable than others on your website. It’s one of the central concerns for SEOs looking to optimize content on a site. A well-thought-out internal linking structure provides SEO and UX benefits:

  • Internal linking helps organize content based on different categories than the regular navigation;
  • It provides more ways for users to interact with your website;
  • It shows search engine bots which pages are important from your perspective;
  • It provides a clear label for each link and provides context.

Here’s a quick primer in internal linking:

  • The homepage tends to be the most authoritative page on a website. As such, it’s a great page to point to other pages you want to give an SEO boost to.
  • All pages within one link of the home page will often be interpreted by search engine bots as being important.
  • Stop using generic keyword anchors across your website. It could come across as spammy. “Read more” and “click here” provide very little context for users and bots alike (see the sketch after this list).
  • Leverage navigation bars, menus, footers and breadcrumb links to provide ample visibility for your key pages.
  • CTA text should also be clear and very descriptive to encourage conversions.
  • Favor links in a piece of content: it’s highly contextual and has more weight than a generic anchor text or a footer or sidebar link that can be found across the website.
  • According to Google’s John Mueller: a link’s position in a page is irrelevant. However, SEOs tend to prefer links higher on a page.
  • It’s easier for search engines to “evaluate” links in text content vs. image anchors because oftentimes images do not come with clear, contextual ALT attributes.
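Here is a small, hypothetical sketch of the anchor-text point from the list above. The first link gives bots and users almost nothing to work with; the second is contextual and descriptive (the URL and page are made up for illustration):

<!-- Generic anchor: little context for users or bots -->
<p>We wrote about internal linking. <a href="/internal-linking-guide/">Click here</a>.</p>

<!-- Descriptive, in-content anchor: clear label and context -->
<p>Our <a href="/internal-linking-guide/">guide to internal linking for SEO</a>
explains how to choose which pages deserve the most links.</p>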

Resource:

Is there a perfect linking structure at the website level and the page level? The answer is no. A website can have a different linking structure in place depending on its nature (blog, e-commerce, publication, B2B website, etc.) and the information architecture choices made (the information architecture can lead to a pyramid type structure, or something resembling a nest, a cocoon, etc.).


Image SEO

Image SEO is a crucial part of SEO for different types of websites. Blogs and e-commerce websites rely heavily on visual items to attract traffic to their website. Social discovery of content and shoppable media increase visits.

We won’t go into details regarding how to optimize your ALT attributes and file names as other articles do a fine job of it. However, let’s take a look at some of the main image formats we tend to use on the web (and that Google is able to crawl without any issues):

  • JPEG
    Best for photographs or designs with people, places or things.
  • PNG
    Best for images with transparent backgrounds.
  • GIF
    Best for animated GIFs, otherwise, use the JPG format.

Resource:

The Lighter The Better: A Few Tips On Image Compression

Google prefers lighter images. The lighter, the better. However, you may have a hidden problem dragging you down: your CMS. You may upload one image, but your CMS could be creating many more. For example, WordPress will often create 3 to 5 variations of each image in different sizes. This means that images can quickly impact your performance. The best way to deal with this is to compress your images.
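For example, with WordPress’s default size names (thumbnail, medium, large), a single upload can end up referenced in the page along these lines. The file names below are hypothetical, but the point stands: every generated variant needs compressing, not just the original you uploaded.

<img src="chocolate-cake-1024x683.jpg"
     srcset="chocolate-cake-300x200.jpg 300w,
             chocolate-cake-768x512.jpg 768w,
             chocolate-cake-1024x683.jpg 1024w"
     sizes="(max-width: 1024px) 100vw, 1024px"
     alt="A slice of chocolate cake on a white plate">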

Don’t Trust Google Page Speed (A Quick Compression Algorithm Primer)

Not sure if images are dragging your performance down? Take a page from your website, put it through the online optimizer and see what the results are! If you plan on using Google Page Speed Insights, you need to consider the fact that this tool uses one specific algorithm to analyze your images. Sometimes, your images are perfectly optimized with another algorithm that’s not detected by Google’s tool. This can lead to a false positive result telling you to optimize images that are already optimized.

Tools You Can Use

If you want to get started with image compression, you can go about three ways:

  • Start compressing images in photo editing tools (most of them have an “export for the web” type of feature).
  • Install a plugin or module that is compatible with your CMS to do the work for you. ShortPixel is a good one to use for WordPress: it is freemium, so you can optimize images for free up to a certain point and then upgrade if you need to compress more. The best thing about it is that it keeps a backup in case you want to revert your changes. EWWW Image Optimizer is another solid option.
  • Use an API or a script to compress images for you. Kraken.io offers a solid API to get the job done. You can also use a tool like ImageOptim.

Lossy vs. Lossless Image Compression

Image compression comes in two flavors: lossy and lossless. There is no magic wand for optimizing images. It depends on the algorithm you use to optimize each image.

Lossy doesn’t mean bad when it comes to images. JPEGS and GIFS are lossy image formats that we use all the time online. Unlike code, you can remove data from images without corrupting the entire file. Our eyes can put up with some data loss because we are sensitive to different colors in different ways. Oftentimes, a 50% compression applied to an image will decrease its file size by 90%. Going beyond that is not worth the image degradation risks as it would become noticeable to your visitors. When it comes to lossy image compression, it’s about finding a compromise between quality and size.

Lossless image compression focuses on removing metadata from JPEG and PNG files. This means that you will have to look into other ways to optimize your load time as images will always be heavier than those optimized with a lossy compression.

Banners With Text In It

Ever open Pinterest? You will see a wall of images with text in it. The reality for many of us in SEO is that Google bot can’t read all about how to “Crack chicken noodle soup” or what Disney couple you are most like. Google can read image file names and image ALT text. So it’s crucial to think about this when designing marketing banners that include text. Always make sure your image file name and image ALT attribute are optimized to give Google a clue as to what is written on the image. If possible, favor image content with a text overlay available in the code. That way, Google will be able to read it!

Here is a quick checklist to help you optimize your image ALT attributes (a short markup sketch follows the list):

  • ALT attributes shouldn’t be too long: aim for 12 words or less.
  • ALT attributes should describe the image itself, not the content it is surrounded by (if your picture is of a palm tree, do not title it “the top 10 beaches to visit”).
  • ALT attributes should be in the proper language. Here is a concrete example: if a page is written in French, do not provide an English ALT attribute for the image in it.
  • ALT attributes can be written like regular sentences. No need to separate the words by dashes, you can use spaces.
  • ALT attributes should be descriptive in a human-friendly way. They are not made to contain a series of keywords separated by commas!
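Putting the checklist together, a marketing banner could be marked up along these lines (the file name, ALT text, and overlay copy are hypothetical). Note that the promotional text lives in real HTML on top of the image rather than being baked into the pixels, so Google can read it:

<figure class="promo-banner">
  <!-- Descriptive file name and ALT text describe the image itself -->
  <img src="palm-tree-sunset-beach.jpg"
       alt="A palm tree silhouetted against a sunset on the beach">
  <!-- The overlay text stays readable by bots because it is real text in the markup -->
  <figcaption class="overlay">Our top 10 beaches to visit this summer</figcaption>
</figure>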

Google Lens

Google Lens is available on Android phones and rolling out to iOS. It is a nifty little addition because it can interpret many images the way a human would. It can read text embedded in images, can recognize landmarks, books, movies and scan barcodes (which most humans can’t do!).

Of course, the technology is so recent that we cannot expect it to be perfect. Some things need to be improved such as interpreting scribbled notes. Google Lens represents a potential bridge between the offline world and the online design experience we craft. AI technology and big data are leveraged to provide meaningful context to images. In the future, taking a picture of a storefront could be contextualized with information like the name of the store, reviews, and ratings for example. Or you could finally figure out the name of a dish that you are eating (I personally tested this and Google figured out I was eating a donburi).

Here is my prediction for the long term: Google Lens will mean less stock photography in websites and more unique images to help brands. Imagine taking a picture of a pair of shoes and knowing exactly where to buy them online because Google Lens identified the brand and model along with a link to let you buy them in a few clicks?


Resource:

Penalties For Visual Interferences On Mobile

Google has put into place new design penalties that influence a website’s mobile ranking on its results pages. If you want to know more about it, you can read an in-depth article on the topic. Bottom line: avoid unsolicited interstitials on mobile landing pages that are indexed in Google.

SEOs do have guidelines, but we do not always have the visual creativity to provide tasteful solutions that comply with Google’s standards; that is where designers come in.

Essentially, marketers have long relied on interstitials as promotional tools to help them engage and convert visitors. An interstitial can be defined as something that blocks out the website’s main content. If your pop-ups cover the main content shown on a mobile screen and appear without user interaction, chances are that they will trigger an algorithmic penalty.

Types of intrusive interstitials, as illustrated by Google. (Large preview)

As a gentle reminder, this is what would be considered an intrusive interstitial by Google if it were to appear on mobile:

Source. (Large preview)

Tips On How To Avoid A Penalty

  • No pop-ups;
  • No slide-ins;
  • No interstitials that take up more than 20% of the screen;
  • Replace them with non-intrusive ribbons at the top or bottom of your pages (a minimal ribbon sketch follows this list);
  • Or opt for inline opt-in boxes that are in the middle or at the end of your pages.
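As a minimal sketch of the ribbon suggestion above (class names, colors, and copy are hypothetical), a fixed bar at the bottom of the viewport can promote a newsletter without covering the main content:

<style>
  /* A dismissible ribbon pinned to the bottom, well under 20% of the screen */
  .signup-ribbon {
    position: fixed;
    bottom: 0;
    left: 0;
    right: 0;
    max-height: 15vh;
    padding: .5em 1em;
    background: #fff8e1;
    border-top: 1px solid #ccc;
    text-align: center;
  }
</style>

<div class="signup-ribbon">
  <a href="/newsletter/">Get the monthly newsletter</a>
  <button type="button" aria-label="Dismiss this message">×</button>
</div>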

Here’s a solution that may be a bit over the top (with technically two banners on one screen) but that still stays within official guidelines:

Source: primovelo.com. Because the world needs more snow bikes and Canada! (Large preview)

Some People May Never See Your Design

More and more, people are turning to voice search when looking for information on the web. Over 55% of teens and 41% of adults use voice search. The surprising thing is that this pervasive phenomenon is very recent: most people started in the last year or so.

Users request information from search engines in a conversational manner — keywords be damned! This adds a layer of complexity to designing a website: tailoring an experience for users who may not ever enjoy the visual aspect of a website. For example, Google Home can “read” out loud recipes or provide information straight from position 0 snippets when a request is made. This is a new spin on an old concept. If I were to ask Google Home to give me the definition of web accessibility, it would probably read the following thing out loud to me from Wikipedia:


This is an extension of accessibility after all. This time around though, it means that a majority of users will come to rely on accessibility to reach informative content.

Designing for voice search means prioritizing your design to be heard instead of seen. Those interested in extending the design all the way to the code should look into the impact rich snippets have on how your data is structured and given visibility in search engine results pages.
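As a hint of what that looks like in the code, here is a minimal, hypothetical JSON-LD block using the schema.org Recipe type; structured data like this is part of what allows assistants and search engines to read content such as recipes back to users:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Chocolate Cake",
  "description": "A simple one-bowl chocolate cake.",
  "recipeIngredient": ["200g flour", "150g sugar", "50g cocoa powder", "2 eggs"],
  "recipeInstructions": "Mix the dry ingredients, add the eggs, and bake for 35 minutes."
}
</script>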

Design And UX Impact SEO

Here is a quick cheat sheet for this article. It contains concrete things you can do to improve your SEO with UX and design:

  1. Google will start ranking websites based on their mobile experience. Review the usability of your mobile version to ensure you’re ready for the coming changes in Google.
  2. Check the content organization of your pages. H1, H2, and H3 tags should help create a path through the content that the bot can follow.
  3. Keyword strategy takes a UX approach to get to the core of users’ search intents to craft optimized content that ranks well.
  4. Internal linking matters: the links you have on your website signal to search engine bots which pages you find more valuable than others on your website.
  5. Give images more visibility: optimize file names, ALT attributes and think about how the bot “reads” your images.
  6. Mobile penalties now include pop-ups, banners and other types of interstitials. If you want to keep ranking well in Google mobile search results, avoid unsolicited interstitials on your landing pages.
  7. With the rise of assistants like Google Home and Alexa, designing for voice search could become a reality soon. This will mean prioritizing your design to be heard instead of seen.
Smashing Editorial(da, lf, ra, yk, il)

May 09 2018

11:20

Contributing To MDN Web Docs

Rachel Andrew
2018-05-09T13:20:47+02:00 (updated 2018-05-18T15:22:16+00:00)

MDN Web Docs has been documenting the web platform for over twelve years and is now a cross-platform effort with contributions and an Advisory Board with members from Google, Microsoft and Samsung as well as those representing Firefox. Something that is fundamental to MDN is that it is a huge community effort, with the web community helping to create and maintain the documentation. In this article, I’m going to give you some pointers as to the places where you can help contribute to MDN and exactly how to do so.

If you haven’t contributed to an open source project before, MDN is a brilliant place to start. Skills needed range from copyediting, translating from English to other languages, HTML and CSS skills for creating Interactive Examples, or an interest in browser compatibility for updating Browser Compatibility data. What you don’t need to do is to write a whole lot of code to contribute. It’s very straightforward, and an excellent way to give back to the community if you have ever found these docs useful.

Contributing To The Documentation Pages

The first place you might want to contribute is to the MDN docs themselves. MDN is a wiki, so you can log in and start to help by correcting or adding to any of the documentation for CSS, HTML, JavaScript or any of the other parts of the web platform covered by MDN.

To start editing, you need to log in using GitHub. As is usual with a wiki, any editors of a page are listed, and this section will use your GitHub username. If you look at any of the pages on MDN, contributors are listed at the bottom of the page; the image below shows the current contributors to the page on CSS Grid Layout.

The contributors to the CSS Grid Layout page. (Large preview)

What Might You Edit?

Things that you might consider as an editor are fixing obvious typos and grammatical errors. If you are a good proofreader and copyeditor, then you may well be able to improve the readability of the docs by fixing any spelling or other errors that you spot.


You might also spot a technical error, or somewhere the specs have changed and where an update or clarification would be useful. With the huge range of web platform features covered by MDN and the rate of change, it is very easy for things to get out of date. If you spot something, fix it!

You may be able to use some specific knowledge you have to add additional information. For example, Eric Bailey has been adding Accessibility Concerns sections to many pages. This is a brilliant effort to highlight the things we should be thinking about when using a certain thing.

This section highlights the things we should be aware of when using background-color. (Large preview)

Another place you could add to a page is in adding “See also” links. These could be links to other parts of MDN, or to external resources. When adding external resources, these should be highly relevant to the property, element or technique being described by that document. A good candidate would be a tutorial which demonstrates how to use that feature, something which would give a reader searching for information a valuable next step.

How To Edit A Document?

Once you are logged in, you will see a link to Edit on pages in MDN; clicking this will take you into a WYSIWYG editor for editing content. Your first few edits are likely to be small changes, in which case you should be able to follow your nose and edit the text. If you are making extensive edits, then it would be worth taking a look at the style guide first. There is also a guide to using the WYSIWYG Editor.

After making your edit, you can Preview and then Publish. Before publishing it is a good idea to explain what you added and why using the Revision Comment field.

Add a comment using the Revision Comment field. (Large preview)

Language Translations

Those of us with English as a first language are incredibly fortunate when it comes to information on the web, being able to get pretty much all of the information that we could ever want in our own language. If you are able to translate English language pages into other languages, then you can help to translate MDN Web Docs, making all of this information available to more people.

Translations available for the background-color page. (Large preview)

If you click on the language icon on any page, you can see which languages that information has been translated into, and you can add your own translations following the information on the page Translating MDN Pages.

Interactive Examples

The Interactive Examples on MDN are the examples that you will see at the top of many pages of MDN, such as this one for the grid-area property.

The Interactive Example for the grid-area property. (Large preview)

These examples allow visitors to MDN to try out various values for CSS properties or try out a JavaScript function, right there on MDN without needing to head into a development environment to do so. The project to add these examples has been in progress for around a year; you can read about the project and the progress to date in the post Bringing Interactive Examples to MDN.

The content for these Interactive Examples is held in the Interactive Examples GitHub repository. For example, if you wanted to locate the example for grid-area, you would find it in that repo under live-examples/css-examples/grid. Under that folder, you will find two files for grid-area, an HTML and a CSS file.

grid-area.html


<section id="example-choice-list" class="example-choice-list large" data-property="grid-area">
    <div class="example-choice" initial-choice="true">
        <pre><code class="language-css">grid-area: a;</code></pre>
        <button type="button" class="copy hidden" aria-hidden="true">
        <span class="visually-hidden">Copy to Clipboard</span>
        </button>
    </div>
    
    <div class="example-choice">
        <pre><code class="language-css">grid-area: b;</code></pre>
        <button type="button" class="copy hidden" aria-hidden="true">
        <span class="visually-hidden">Copy to Clipboard</span>
        </button>
    </div>
    
    <div class="example-choice">
        <pre><code class="language-css">grid-area: c;</code></pre>
        <button type="button" class="copy hidden" aria-hidden="true">
        <span class="visually-hidden">Copy to Clipboard</span>
        </button>
    </div> 
    
    <div class="example-choice">
        <pre><code class="language-css">grid-area: 2 / 1 / 2 / 4;</code></pre>
        <button type="button" class="copy hidden" aria-hidden="true">
        <span class="visually-hidden">Copy to Clipboard</span>
        </button>
    </div> 
</section>
    
<div id="output" class="output large hidden">
    <section id="default-example" class="default-example">
        <div class="example-container">
            <div id="example-element" class="transition-all">Example</div>
        </div>
    </section>
</div>

grid-area.css


.example-container {
    background-color: #eee;
    border: .75em solid;
    padding: .75em;
    display: grid;
    grid-template-columns: 1fr 1fr 1fr;
    grid-template-rows: repeat(3, minmax(40px, auto));
    grid-template-areas:
        "a a a"
        "b c c"
        "b c c";
    grid-gap: 10px;
    width: 200px;
}

.example-container > div {
    background-color: rgba(0, 0, 255, 0.2);
    border: 3px solid blue;
}

/* The element with id="example-element" in grid-area.html is the one the visitor manipulates */
#example-element {
    background-color: rgba(255, 0, 200, 0.2);
    border: 3px solid rebeccapurple;
}

An Interactive Example is just a small demo, which uses some standard classes and IDs in order that the framework can pick up the example and make it interactive, where the values can be changed by a visitor to the page who wants to quickly see how it works. To add or edit an Interactive Example, first fork the Interactive Examples repo, clone it to your machine and follow the instructions on the Contributing page to install the required packages from npm and be able to build and test examples locally.

Then create a branch and edit or create your new example. Once you are happy with it, send a Pull Request to the Interactive Examples repo to ask for your example to be reviewed. In order to keep the examples consistent, reviews are fairly nitpicky but should point out the changes you need to make in a clear way, so you can update your example and have it approved, merged and added to an MDN page.

MDN looking for help with HTML Interactive Examples. (Large preview)

With pretty much all of CSS now covered (in addition to the JavaScript examples), MDN is now looking for help to build the HTML examples. There are instructions as to how to get started in a post on the MDN Discourse Forum. Check out that post as it gives links to a Google doc that you can use to indicate that you are working on a particular example, as well as some other useful information.

The Interactive Examples are incredibly useful for people exploring the web platform, so adding to the project is an excellent way to contribute. Contributing to CSS or HTML examples requires knowledge of CSS and HTML, plus the ability to think up a clear demonstration. This last point is often the hardest part: I’ve created a lot of CSS Interactive Examples and spent more time thinking up the best example for each property than I did actually writing the code.

Browser Compat Data

Fairly recently the browser compatibility data listed on MDN Pages has begun to be updated through the Browser Compatibility Project. This project is developing browser compat data in JSON format, which can display the compatibility tables on MDN but also be useful data for other purposes.

The old Browser Compat Tables on MDN. (Large preview) The new Browser Compat Tables on MDN. (Large preview)

The Browser Compatibility Data is on GitHub, and if you find a page that has incorrect information or is still using the old tables, you can submit a Pull Request. The repository contains contribution information, however, the simplest way to start is to edit an existing example. I recently updated the information for the CSS shape-outside property. The property already had some data in the new format, but it was incomplete and incorrect.

To edit this data, I first forked the Browser Compat Data so that I had my own fork. I then cloned that to my machine and created a new branch to make my changes in.

Once I had my new branch, I found the JSON file for shape-outside and was able to make my edits. I already had a good idea about browser support for the property; I also used the live example on the shape-outside MDN page to test to see support when I wasn’t sure. Therefore making the edits was a case of working through the file, checking the version numbers listed for support of the property and updating those which were incorrect.


As the file is in JSON format, it is pretty straightforward to edit in any text editor. The .editorconfig file explains the simple formatting rules for these documents. There are also some helpful tips in this checklist.

Once you have made your edits, you can commit your changes, push your branch to your fork and then make a Pull Request to the Browser Compat Data repository. It’s likely that, as with the live examples, the reviewer will have some changes for you to make. In my PR for the Shapes data I had a few errors in how I had flagged the data and needed to make some changes to links. These were simple to make, and then my PR was merged.

Get Started

In many cases, you can get started simply by picking something to add to and starting work on it. If you have any questions or need some help with any of this, then the MDN Discourse forum is a good place to post. MDN is the place I go to look up information, the place I send new developers and experienced developers alike, and its strength is the fact that we can all work to make it better.

If you have never made a Pull Request on a project before, it is a very friendly place to make that first PR and, as I hope I have shown, there are ways to contribute that don’t require writing any code at all. A very valuable skill for any documentation project is that of writing, editing and translating as these skills can help to make technical documentation easier to read and accessible to more people around the world.

Smashing Editorial(il)

May 08 2018

12:30

I Used The Web For A Day With JavaScript Turned Off

Chris Ashton
2018-05-08T14:30:10+02:00 (updated 2018-05-18T15:22:16+00:00)

This article is part of a series in which I attempt to use the web under various constraints, representing a given demographic of user. I hope to raise the profile of difficulties faced by real people, which are avoidable if we design and develop in a way that is sympathetic to their needs. This week, I’m disabling JavaScript.

Why noscript Matters

Firstly, to clarify, there’s a difference between supporting a noscript experience and using the noscript tag. I don’t generally like the noscript tag, as it fragments your web page into JavaScript and non-JavaScript versions rather than working from the same baseline of content, which is how experiences get messy and things get overlooked.

You may have lots of useful content inside your noscript tags, but if I’m using a JavaScript-enabled browser, I’m not going to see any of that — I’m going to be stuck waiting for the JS experience to download. When I refer to the ‘noscript’ experience, I generally mean the experience of using the web page without JavaScript, rather than the explicit use of the tag.
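To illustrate the difference with a hypothetical snippet: the first pattern forks the page into two experiences with the noscript tag, while the second starts everyone from the same working HTML and lets JavaScript enhance it later.

<!-- Forked experience: JS users wait for an app, non-JS users get a separate fallback -->
<div id="app"></div>
<noscript><a href="/products">Browse our products</a></noscript>

<!-- Progressive enhancement: one baseline that works for everyone -->
<a href="/products" class="js-enhance-me">Browse our products</a>
<!-- If and when JavaScript loads, it can upgrade this link into a richer experience -->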


So, who cares about users who don’t have JavaScript? Do such noscript users even exist anymore?

Well, they do exist, albeit in small numbers: roughly 0.2% of users in the UK have JavaScript disabled. But looking at the numbers of users who have explicitly disabled JavaScript is missing the point.

I’m reminded of this quote by Jake Archibald:

“All your users are non-JS while they’re downloading your JS.”

Think of those users who have JavaScript enabled but who don’t get the JavaScript experience, for any number of reasons, including corporate or local blocking or stripping of JavaScript elements, existing JavaScript errors in the browser from browser add-ons and toolbars, network errors, and so on. BuzzFeed recently revealed that around 1% of requests for their JavaScript time out, equating to 13 million failed requests per month.


Sometimes the issue isn’t with the user but with the CDN delivering the JavaScript. Remember in February 2017 when Amazon’s servers went down? Millions of sites that rely on JavaScript delivered over Amazon’s CDNs were in major trouble, costing companies in the S&P 500 index $150 million in the four-hour outage.

Think also of the emerging global markets; countries still battling to build a network of fast internet, with populations unable to afford fast hardware to run CPU-intensive JavaScript. Or think of the established markets, where even an iPhone X on a 4G connection is not immune to the effects of a partially loaded webpage interrupted by their train going into a tunnel.

The web is a hostile, unpredictable environment, which is why many developers follow the principle of progressive enhancement to build their sites up from a core experience of semantic HTML, layering CSS and unobtrusive JavaScript on top of that. I wanted to see how many sites apply this in practice. What better way than disabling JavaScript altogether?

How To Disable JavaScript

If you’d like to recreate my experiment for yourself, you can disable JavaScript by digging into the settings in Chrome:

  • Open the Developer Tools (Chrome -> View -> Developer Tools, or ⌥⌘I on the keyboard)
  • Open the developer submenu (the three dots next to the close icon on the Developer Tools)
  • Choose ‘Settings’ from this submenu
  • Find the ‘Debugger’ section and check the ‘Disable JavaScript’ box

Or, like me, you can use the excellent Toggle JavaScript Chrome Extension which lets you disable JS in one click.

Creating A WordPress Post With JavaScript Disabled

After disabling JavaScript, my first port of call was to go to my personal portfolio site — which runs on WordPress — with the aim of writing down my experiences in real time.

WordPress is actually very noscript-friendly, so I was able to start writing this post without any difficulty, although it was missing some of the text formatting and media embedding features I’m used to.

Let’s compare WordPress’ post screen with and without JavaScript:

The noscript version of WordPress’ post page, which is made up of two basic text inputs. The JavaScript version contains shortcuts for formatting text, embedding quotes and media, and previewing the content as HTML.

I felt quite comfortable without the toolbars until I needed to embed screenshots in my post. Without the ‘Add Media’ button, I had to go to separate screens to upload my files. This makes sense, as ‘background uploading’ content requires Ajax, which requires JavaScript. But I was quite surprised that the separate media screen also required JavaScript!

Luckily, there was a fallback view:

WordPress media grid view (requires JS) The noscript version of the Media section in the admin backend. I was warned that the grid view was not supported without JavaScript. WordPress media list view (fallback) Who needs grids anyway? The list view was perfectly fine for my needs.

After uploading the image, I had to manually write an HTML img tag in my post and copy and paste the image URL into it. There was no way of determining the thumbnail URL of the uploaded image, and any captions I wrote also had to be manually copied. I soon got fed up with this approach and planned to come back the next day and re-insert all of the images once I allowed myself to use JavaScript again.

I decided to take a look at how the front-end of my site was doing.

Viewing My Site Without JavaScript

I was pleasantly surprised that my site looked largely the same without JS:

With JavaScript Personal site with JavaScript. Without JavaScript Personal site without JavaScript. Only the Twitter embed looks any different.

Let’s take a closer look at that Twitter embed:

Tweet with JavaScript Note the author information, engagement stats, and information link that we don’t get with the noscript version. The ‘tick’ is an external PNG. (Source) Tweet without JavaScript Missing styles, but contains all of the content, including hashtag link and link to tweet. The ‘tick’ is an ASCII character: ✔.

Below the fold of my site, I’ve also embedded some Instagram content, which held up well to the noscript experience.

Instagram embed with JavaScript Notice the slideshow dots underneath the image, indicating there are more images in the gallery. Instagram embed without JavaScript The noJS version doesn’t have such dots. Other than the missing slideshow functionality, this is indistinguishable from the JS version.

Finally, I have a GitHub embed on my site. GitHub doesn’t offer a native embed, so I use the unofficial GitHub Cards by Hsiaoming Yang.

GitHub embed with JavaScript The unofficial card gives a nice little snapshot and links to your GitHub profile. GitHub embed without JavaScript I provide a fallback link to GitHub if no JavaScript is available.

I was half hoping to shock you with the before and after stats (megabytes of JS for a small embed! End of the world! Let’s ditch JavaScript!), and half hoping there’d be very little difference (progressive enhancement! Leading by example! I’m a good developer!).

Let’s compare page weights with and without JavaScript. Firstly, with JavaScript:

Page weight with JavaScript 51 HTTP requests, with 1.9MB transferred.

Now without JavaScript:

Page weight without JavaScript 18 HTTP requests, with 1.3MB transferred.

For the sake of a styled tweet, a GitHub embed and a full-fat Instagram embed, my site grows an extra 600KB. I’ve also got Google Analytics tracking and some nerdy hidden interactive features. All things considered, 600KB doesn’t seem over the top — though I am a little surprised by the number of additional requests the browser has to make for all that to happen.

All the content is still there without JavaScript, all the menus are still navigable, and with the exception of the Twitter embed, you’d be hard-pressed to realize that JavaScript is turned off. As a result, my site passes the NOSCRIPT-5 level of validation — the very best non-JavaScript rating possible.

ashton.codes noscript rating: NOSCRIPT-5. ✅

What’s that? You haven’t heard of the noscript classification system? I’d be very surprised if you had because I just made it up. It’s my handy little indicator of a site’s usability without JavaScript, and by extension, it’s a pretty good indicator of how good a site is at progressively enhancing its content.

noscript Classification System

Websites — or more accurately, their individual pages — tend to fall into one of the following categories:

  • NOSCRIPT-5
    The site is virtually indistinguishable from the JavaScript-enabled version of the site.
  • NOSCRIPT-4
    The site provides functionality parity for noscript, but links to or redirects to a separate version of the site to achieve that.
  • NOSCRIPT-3
    The site largely works without JavaScript, but some non-key features are unsupported or look broken.
  • NOSCRIPT-2
    The site offers a message saying that the browser is not supported.
  • NOSCRIPT-1
    The site appears to load, but the user is unable to use key functionality at all.
  • NOSCRIPT-0
    The site does not load at all and offers no feedback to the user.

Let’s look at some popular sites and see how they score.

Amazon

I’ve had my eye on a little robotic vacuum cleaner for a while. My lease doesn’t allow any pets, and this is the next best thing once you put some googly eyes on it.

At first glance, Amazon does a cracking job with its non-JavaScript solution, although the main product image is missing.

Amazon without JavaScript Missing the main image, but unmistakably Amazon. Amazon with JavaScript With JavaScript, we get the main image. Look at this lovely little vacuum.

On closer inspection, quite a few things were a bit broken on the noscript version. I’d like to go through them one by one and suggest a solution for each.

No Gallery Images

I wanted to see some pictures of the products, but clicking on the thumbnails gave me nothing.

Issue

I clicked on these thumbnails, but nothing happened.

Potential Solution

It would have been nice if these thumbnails were links to the full image, opening in a new tab. They could then be progressively enhanced into the image gallery by using JavaScript (see the sketch after the list below):

  • Hijack the click event of the thumbnail link;
  • Grab the href attribute;
  • Update the src attribute of the main image with the href attribute value.
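A minimal sketch of how that might look; the markup, file paths, and class names here are made up for illustration:

<a class="thumbnail-link" href="/images/vacuum-front-large.jpg" target="_blank">
  <img src="/images/vacuum-front-thumb.jpg" alt="Front view of the vacuum">
</a>
<img class="main-product-image" src="/images/vacuum-front-large.jpg" alt="Front view of the vacuum">

<script>
// Without JavaScript, each thumbnail is a plain link to the full-size image.
// With JavaScript, hijack the click and swap the main image instead.
document.querySelectorAll('.thumbnail-link').forEach(function (link) {
  link.addEventListener('click', function (event) {
    event.preventDefault();
    document.querySelector('.main-product-image').src = link.href;
  });
});
</script>

Users without JavaScript still get the full-size image via the plain link; everyone else gets the in-page gallery.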

The ‘Report Incorrect Product Information’ Link Is JavaScript-Only

Is this feature really so commonly used that it’s worth downloading extra bytes of JavaScript to all of your users so that it opens as an integrated modal within the page?

Issue

Amazon integrated modal window (JavaScript version). It’s a good thing the product information looked accurate to me, because there was no way I could report any issues! The `href` attribute had a value of javascript://, which opens an integrated modal form.

Potential Solution

The Amazon integrated modal form requires JavaScript to work. I would make the ‘report feature’ a standalone form on a separate URL, e.g. /report-product?product-id=123. This could be progressively enhanced into the integrated modal using Ajax to download the HTML separately.

Reviews Are Only Partially Visible By Default

Issue

The ‘Read more’ link does nothing.

Potential Solution

Why not show the whole review by default and then use JavaScript to truncate the review text and add the ‘Read more’ link?
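As a rough sketch (the class name and the 300-character cut-off are arbitrary), the enhancement could look something like this:

<div class="review-body">The full review text is served in the HTML…</div>

<script>
// Truncate long reviews and add a “Read more” link, but only when JavaScript runs.
document.querySelectorAll('.review-body').forEach(function (review) {
  var fullText = review.textContent;
  if (fullText.length <= 300) return;

  review.textContent = fullText.slice(0, 300) + '… ';

  var readMore = document.createElement('a');
  readMore.href = '#';
  readMore.textContent = 'Read more';
  readMore.addEventListener('click', function (event) {
    event.preventDefault();
    review.textContent = fullText; // Restore the full review on demand.
  });
  review.appendChild(readMore);
});
</script>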

It’s worth pointing out that the review title is a link to the review on a standalone page, so it is at least still possible to read the content.

On the whole, I was actually pleasantly surprised just how well the site worked without JavaScript. It could just as easily have been a blank white page. However, the lack of product images means we’re missing out on a really core feature — I’d argue it’s critical to be able to see what you’re buying! — so it’s a shame we couldn’t put the icing on the cake and award it a NOSCRIPT-5 rating.

Amazon noscript rating: NOSCRIPT-3. 🤷‍

I still hadn’t decided which product I wanted to buy, so I turned to Camel Camel Camel, the Amazon price tracker.

Camel Camel Camel

I wanted to decide between the iLife V3s Pro and the iLife A4s, so I headed over to https://uk.camelcamelcamel.com/. At first, the site looked indistinguishable from the JavaScript-enabled version.

Camel Camel Camel, looking nice and professional — with no JavaScript. You could run a git diff on these screenshots and struggle to see the difference!

Unfortunately, the price history chart did not render. It did provide an alt text fallback, but the alt text did not give me any idea of whether or not the price trend has been going up or down.

Noscript version Alt text says “Amazon price history chart” but provides no insight into the data. JavaScript version Look at this lovely chart you get when JavaScript is enabled.

General suggestion: provide meaningful alt text at all times. I don’t necessarily need to see the chart, but I would appreciate a summary of what it contains. Perhaps, in this case, it might be “Amazon price history chart showing that the price of this item has remained largely unchanged since March 2017.” But automatically generating a summary like that is admittedly difficult and prone to anomalies.

Specific suggestion for this use case: show the image. The chart on the scripted version of the site is actually a standalone image, so there’s no reason why it couldn’t be displayed on the noscript version!

Still, the core content below the chart gave me the information I needed to know.

Who needs a chart? We’ve got a table!

The table provides the feature parity needed to secure a NOSCRIPT-5 rating. I take my hat off to you, Camel Camel Camel!

Camel Camel Camel noscript rating: NOSCRIPT-5 ✅

Google Products

At this point in my day, I received a phone call out of the blue: A friend phoned me and asked about meeting up this week. So I went to Google Calendar to check my availability. Google had other ideas!

Surprisingly, Google Calendar offers nothing for noscript users.

I was disappointed that there wasn’t a noscript fallback — Google is usually pretty good at this sort of thing.

I wouldn’t expect to necessarily be able to add/edit/delete entries to my calendar, but it should be possible to provide a read-only view of my calendar as core content.

Google calendar noscript rating: NOSCRIPT-0 🔥

Interested in seeing how Google manages other products, I had a quick look at Google Spreadsheets:

Google Spreadsheets shows my spreadsheet but has a big warning message saying “JavaScript isn’t enabled” and won’t let me edit its contents.

In this case, the site fails a lot more gracefully. I can at least read the spreadsheet contents, even if I can’t edit them. Why doesn’t the calendar offer the same fallback solution?

I have no suggestions to improve Google Spreadsheets! It does a good job at informing the user if core functionality is missing from the noscript experience.

Google spreadsheets noscript rating: NOSCRIPT-2 😅

This rating isn’t actually that bad! Not all sites are going to be able to offer a noscript experience, but at least if they’re upfront and honest (i.e. they’ll say “yeah, we’re not going to try to give you anything”) that prepares you — the noscript user — for when it fails. You won’t waste a few precious seconds trying to fill in a form that won’t ever submit, or start reading an article that then has to use Ajax to retrieve the rest of its contents.

Now, back to my potential Amazon purchase. I wanted to look at some third-party reviews before making a purchase.

Google search works really well without JavaScript. It just looks a little dated, like those old desktop-only sites at fixed resolutions.

Noscript version: extra search options on the left (otherwise tucked away in settings on the JS version) — and no privacy banner (perhaps because ‘tracking’ is not relevant to noscript users?). JavaScript version: the ability to search via voice input, and the ‘privacy reminder’ message.

The images view looks even more different, and I actually prefer it in a few ways — this version loads super quickly and lists the dimensions and image size in kilobytes underneath each thumbnail:

Noscript version: notice the image meta information, which is not supplied on the scripted version! JavaScript version: notice the ‘related search terms’ area, which is not supplied on the noscript version.

Google Search noscript rating: NOSCRIPT-5 ✅

One of the search results took me to a review on YouTube. I clicked, not expecting much. I was right not to get excited:

YouTube doesn’t offer much of a noscript experience.

I wouldn’t really expect a site like YouTube to work without JavaScript. YouTube requires advanced streaming capabilities, not to mention that it would open itself up to copy theft if it provided a standalone MP4 download as a fallback. In any case, no site should look broken. I stared at this screen for a few seconds before realizing that nothing else was going to happen.

Suggestion: If your site is not able to provide a fallback solution for noscript users, at a minimum you should provide a noscript warning message.
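Even a couple of lines of markup would do; a minimal sketch, with wording that is only an example:

<noscript>
  <p>
    Sorry, this site needs JavaScript to play videos.
    Please enable JavaScript in your browser settings and reload the page.
  </p>
</noscript>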

YouTube noscript rating: NOSCRIPT-0 🔥

Which?

I clicked a couple more review links. The Which? advice site failed me completely.

The site says there are 10 good vacuums to choose from, but the list is clearly populated with Ajax or something, as I’m seeing nothing.

This was a page that looked like it loaded fine, but only when you read the content would you realize you must actually be missing some key information. That key information is absolutely core to the purpose of the page, and I can’t get it. Therefore, unfortunately, that’s a NOSCRIPT-1 violation.

Suggestion: If your site Ajaxes in content, that content exists at another URL. Provide a link to that content for your noscript users. You can always hide the link when you’ve successfully Ajaxed with JavaScript.
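Here’s a rough sketch of that pattern; the URL, query parameter, and class names are invented for illustration:

<a class="js-fallback-link" href="/reviews/best-vacuum-cleaners">
  See the full list of recommended vacuum cleaners
</a>
<div class="js-review-list"></div>

<script>
// Fetch the same content with Ajax and hide the fallback link once it arrives.
fetch('/reviews/best-vacuum-cleaners?partial=true')
  .then(function (response) { return response.text(); })
  .then(function (html) {
    document.querySelector('.js-review-list').innerHTML = html;
    document.querySelector('.js-fallback-link').hidden = true;
  });
// If the request fails, the fallback link simply stays visible.
</script>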

Which? review site noscript rating: NOSCRIPT-1 😫

Facebook

Eventually, I conceded that I can’t really afford a vacuum right now. So, I decided to hop onto social media.

Facebook says JavaScript is required to proceed, or we can click the link to the mobile site.

Facebook flat-out refuses to load without JavaScript, but it does offer a fallback option. Here’s our first example of a NOSCRIPT-4 — a site which offers a separate version of its content for noscript or feature phone users.

The mobile site version of Facebook.

The mobile version loads instantly. It looks ugly, but it seems as though I get the same content as I normally would. Crucially, I have feature parity: I can accomplish the same things here as I can on the main site.

Facebook noscript rating: NOSCRIPT-4 🤔

The page loaded at lightning speed:

50.8KB. Page loaded in 1.39 seconds.

I could only see 7 items in the news feed at any one time, but I could click to “See More Stories,” which takes me to a new page, using traditional pagination techniques.

I find myself impressed that I have the option to ‘react’ to a Facebook comment, though this is a multi-screen task:

Reacting first requires you to click ‘React’… which then takes you to a separate screen to choose your reaction.

There’s nothing stopping Facebook building a hover ‘reaction’ menu in non-JavaScript, but to be fair this is aimed at mobile devices that aren’t able to hover.

Suggestion: Get creative with CSS. You may find that you don’t need JavaScript at all.
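For instance, a reaction menu could be revealed with the :focus-within pseudo-class alone; this is only a sketch, assuming a wrapping element and a browser that supports the selector:

.reaction-widget .reaction-menu {
  display: none;
}

/* Tapping or tabbing to the “React” button puts focus inside the widget,
   which is enough to reveal the menu, no JavaScript involved. */
.reaction-widget:focus-within .reaction-menu {
  display: flex;
}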

Before long, a video item came up in my news feed. (At this point, it dawned on me just how much less video content I had seen on the mobile version compared to normal Facebook, meaning I’d actually been seeing peoples’ statuses rather than a random video they ‘liked’ — a major improvement as far as I’m concerned!)

I fully expected the video not to work when I clicked it, but clicking on the thumbnail opened the video in a new tab:

You don’t need JavaScript to play MP4 files.

I’m pleasantly surprised that all of the functionality appears to be there on this noscript version of the site. Eventually, however, I found one feature that was just too clunky and cumbersome to see through to the end: album creation.

I wanted to upload a photo album to Facebook, but in noscript-land this is a beast of a task. It involves uploading one photo at a time, going through two or three screens for each upload. I desperately tried and failed to find a bulk-upload option.

The laboriousness of this got to me after photo number three (my album will contain many more), so I decided to call it a day and come back tomorrow when I’ve got JavaScript.


Twitter

Things got weird when I flew over to Twitter.

Twitter on first load On first load, I got what looked like the normal desktop site. Twitter redirect site After a couple of seconds, I was automatically redirected to the mobile site.

I was intrigued by this mechanism, so dug into the source code, which was actually surprisingly simple:

<noscript><meta http-equiv="refresh" content="0; URL=https://mobile.twitter.com/i/nojs_router?path=%2F"></noscript>

As beautifully simple as this solution is, I found the experience quite clunky because in the flash before I was redirected, I saw that one of the people I follow on Twitter had got engaged. His tweet didn’t appear at the top of the ‘mobile’ version, so I had to go looking for it.

Suggestion: Build in a grace period into your server-side logic so that redirects and careless refreshes don’t lose interesting tweets before you’ve had a chance to read them.

I couldn’t remember my friend’s Twitter handle. Searching was a little tricky — I started to really miss the autofill suggestions!

A screenshot of me filling in 'andy' as a search term, but no autofill suggestions appearing as I type. No autofill suggestions appeared as I typed.

Luckily, the search results page brought his account right up, and I was able to find his tweet. I was even able to reply.

Twitter noscript rating: NOSCRIPT-4 🤔

This may seem like a generous score, given the clunky feel, but remember that the key thing here is feature parity. It doesn’t have to look beautiful.

I tried out a couple more social media sites, which, unlike Twitter, didn’t reach the dizzy heights of NOSCRIPT-4 compliance.

Other Social Networks

LinkedIn has a nice, bespoke loading screen. But it never loads, so all I could do was stare at the logo.

LinkedIn

LinkedIn noscript rating: NOSCRIPT-0 🔥

Instagram gave me literally nothing. A blank page. A whole other flavor of NOSCRIPT-0.

Instagram

Instagram noscript rating: NOSCRIPT-0 🔥🔥🔥

I was surprised Instagram failed so spectacularly here, given that the Instagram embed worked flawlessly on my portfolio site. I guess with an embed you never know what the browser support expectations of the third party are, but as I’m visiting the site directly, Instagram is happy making the call to drop support.

BBC News

I headed over to the BBC to get my fix of news.

BBC without JavaScript In the noscript version, notice the narrow column and the single story with thumbnail. BBC with JavaScript JavaScript version: notice the full use of the desktop screen and multiple article thumbnails.

The menu is a little bit off, and the column is quite narrow (definitely a pattern I’m seeing on a lot of sites — why does “no JavaScript” mean “mobile device”?) but I am able to access the content.


I clicked on the ‘Most Read’ tab, which takes me to another part of the page. With scripting, this anchor link is progressively enhanced to achieve actual tab behavior, which is a lovely example of building up from a solid HTML core.


So far, this is the only example of an anchor link I’ve come across in my experiment, which is a shame as it’s a nice technique that saves an additional page load and saves fragmenting the site into lots of micro pages.

It does look a little odd, though: the ordered list CSS means we get a double numbering glitch here. I click on one of the stories.

The article should contain a video, but instead reads “Media playback is unsupported on your device”. There is no transcript.

I can’t access the video content, but due to rights issues, I suspect the BBC cannot provide a separate standalone video as Facebook does. A transcript would be nice though — and beneficial to more than just noscript users.

Suggestion: Provide textual fallbacks for audio-visual content.

To be fair, the article content basically sums up the content that appears in the video, so I’m not really missing out on information.

The article and index pages load lightning-fast, at about 300KB (mostly images). I do miss the thumbnail images for the other articles on the page, and the ability to make full use of my screen real estate — but that shouldn’t hamper the rating.

BBC noscript rating: NOSCRIPT-5 ✅

GitHub

GitHub looks almost exactly the same as its JavaScript-enabled counterpart. Wow! But I guess this is a site developed by developers, for developers. 😉

GitHub with JavaScript The one difference I can see is the way GitHub deals with time. With JavaScript enabled, notice how it says ‘2 days ago’... GitHub without JavaScript On the noscript version, it instead says “Mar 1, 2018”.

I did a little housekeeping on GitHub, looking around repos and deleting old branches. For a while I genuinely forgot I was on the non-JavaScript version until I came across one little bug:

The “Fetching latest commit…” section will spin forever…

Then I wondered, “How is GitHub going to handle applying labels to issues?” so I gave that a go.

These fields are unresponsive when you click on them.

I was unable to create an issue and add labels to it at the same time. In fact, I couldn’t find any way of adding the label even after creating a blank issue. It’s a shame the site fell at the last hurdle because it was very nearly a seamless comparison with the scripted version.

GitHub noscript rating: NOSCRIPT-3 🤗

While GitHub looks incredible — I would never have known my JavaScript was turned off — not being able to use the same key functionality as the scripted version is a bummer. Even an ugly looking noscript site would get a higher score because functionality is more important than form.

Online Banking

If there’s one place I expected JavaScript to be required, it was on the NatWest bank website. I was wrong.

Not only does it work, but it’s also hard to distinguish from the normal site. The login screen is the same, the only difference being that the focus doesn’t automatically progress through each field as you complete it.

NatWest noscript rating: NOSCRIPT-5 ✅

Miscellaneous

I came across a few more sites throughout my day.

FreeAgent — the tax software site I use for my freelancing — doesn’t even try a noscript fallback. But hey, that’s better than showing a broken website.

FreeAgent shows a no-JavaScript message.

FreeAgent noscript rating: NOSCRIPT-2 ⛔

And CodePen, somewhat understandably, has to be a NOSCRIPT-2 too.

CodePen shows a no-JavaScript message and suggests it would be pretty foolish to expect the site to work without JavaScript!

CodePen noscript rating: NOSCRIPT-2 ⛔

Tonik, the energy provider, doesn’t let me log in, but this seems like an oversight rather than a deliberate decision:

I see the words “embedded area” where I’m supposed to see a login form.

Tonik noscript rating: NOSCRIPT-1 ❌

M&S Energy lets me log in — only to tell me it needs JavaScript to do anything remotely useful.

M&S requires JavaScript to work, but you have to put more effort in to get to that point.

M&S noscript rating: NOSCRIPT-1 ❌

Now I come to my favorite screenshot of the day.

One of my colleagues once recommended an Accessibility for Web Design course, which I bookmarked. I decided to take a look at it today, and laughed at the irony of the alt text:

Alt text of “Personas: Accessibility for Web Design”. Soooo… what am I missing?

With the alt text of “Personas: Accessibility for Web Design,” I’m not too sure what I’m missing here — is it an image? A video? A PDF? The course itself?

Hint: It’s actually a video, though you have to be logged in to watch it.

The alt text isn’t really supporting its purpose, partly because it’s populated automatically. We as a dev community need to get better at this sort of thing. I don’t think I’ve read any useful alt text today.

Summary

I started this experiment with the aim of seeing how many sites are implemented using progressive enhancement. I’ve only visited a tiny handful of sites here, most of them big names with big budgets, so it’s interesting to see the wide variation in no-JavaScript support.

It’s interesting to see that relatively simple sites — Instagram and LinkedIn particularly — have such poor noscript support. I believe this is partly down to the ever-growing popularity of JavaScript frameworks such as React, Angular, and Vue. Developers are now building “web applications” rather than “websites,” with the aim of recreating the look and feel of native apps, and using JavaScript to manage the DOM is the most manageable way of creating such experiences.

There is a danger that more and more sites will require JavaScript to render any content at all. Luckily, it is usually possible to build your content in the same, developer-friendly way but rendered on the server, for example by using Preact instead of React. Making the conscious decision to care about noscript gives the benefits of a core experience as outlined at the beginning of this article, and can make for a faster perceived loading time, too.

It can be quite daunting to think about an application from the ground up, but a decent core experience is usually possible and actually only involves simple tweaks in a lot of cases. A good core experience is indicative of a well-structured web page, which, in turn, is usually a good sign for SEO and for accessibility. It’s usually a well designed web page, as the designer and developer have spent time and effort thinking about what’s truly core to the experience. Progressive enhancement means more robust experiences, with fewer bugs in production and fewer individual browser quirks, because we’re letting the platform do the job rather than trying to write it all from scratch.

What noscript rating does your site conform to? Let us know in the comments!

Smashing Editorial(rb, ra, il)

May 07 2018

10:30

New CSS Features That Are Changing Web Design

New CSS Features That Are Changing Web Design

New CSS Features That Are Changing Web Design

Zell Liew
2018-05-07T12:30:10+02:002018-05-18T15:22:16+00:00

There was a time when web design got monotonous. Designers and developers built the same kinds of websites over and over again, so much so that we were mocked by people in our own industry for creating only two kinds of websites:

Is this the limit of what our “creative” minds can achieve? This thought sent an uncontrollable pang of sadness into my heart.

I don’t want to admit it, but maybe that was the best we could accomplish back then. Maybe we didn’t have suitable tools to make creative designs. The demands of the web were evolving quickly, but we were stuck with ancient techniques like floats and tables.

Today, the design landscape has changed completely. We’re equipped with new and powerful tools — CSS Grid, CSS custom properties, CSS shapes and CSS writing-mode, to name a few — that we can use to exercise our creativity.


How CSS Grid Changed Everything

Grids are essential for web design; you already knew that. But have you stopped to ask yourself how you designed the grid you mainly use?

Most of us haven’t. We use the 12-column grid that has become a standard in our industry.

  • But why do we use the same grid?
  • Why are grids made of 12 columns?
  • Why are our grids sized equally?

Here’s one possible answer to why we use the same grid: We don’t want to do the math.

In the past, with float-based grids, to create a three-column grid, you needed to calculate the width of each column, the size of each gutter, and how to position each grid item. Then, you needed to create classes in the HTML to style them appropriately. It was quite complicated.

To make things easier, we adopted grid frameworks. In the beginning, frameworks such as 960gs and 1440px allowed us to choose between 8-, 9-, 12- and even 16-column grids. Later, Bootstrap won the frameworks war. Because Bootstrap allowed only 12 columns, and changing that was a pain, we eventually settled on 12 columns as the standard.

But we shouldn’t blame Bootstrap. It was the best approach back then. Who wouldn’t want a good solution that works with minimal effort? With the grid problem settled, we turned our attention to other aspects of design, such as typography, color and accessibility.

Now, with the advent of CSS Grid, grids have become much simpler. We no longer have to fear grid math. It’s become so simple that I would argue that creating a grid is easier with CSS than in a design tool such as Sketch!

Why?

Let’s say you want to make a 4-column grid, each column sized at 100 pixels. With CSS Grid, you can write 100px four times in the grid-template-columns declaration, and a 4-column grid will be created.

.grid {
  display: grid;
  grid-template-columns: 100px 100px 100px 100px;
  grid-column-gap: 20px;
}
Screenshot of Firefox's grid inspector that shows four columns. You can create four grid columns by specifying a column-width four times in grid-template-columns

If you want a 12-column grid, you just have to repeat 100px 12 times.

.grid {
  display: grid;
  grid-template-columns: 100px 100px 100px 100px 100px 100px 100px 100px 100px 100px 100px 100px;
  grid-column-gap: 20px;
}
Screenshot of Firefox's grid inspector that shows twelve columns. Creating 12 columns with CSS Grid

Yes, the code isn’t beautiful, but we’re not concerned with optimizing for code quality (yet) — we’re still thinking about design. CSS Grid makes it so easy for anyone — even a designer without coding knowledge — to create a grid on the web.
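If typing 100px twelve times bothers you, CSS Grid’s repeat() notation expresses the same twelve columns more compactly:

.grid {
  display: grid;
  /* The same twelve 100px columns as above, written once. */
  grid-template-columns: repeat(12, 100px);
  grid-column-gap: 20px;
}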

If you want to create grid columns with different widths, you just have to specify the desired width in your grid-template-columns declaration, and you’re set.

.grid {
  display: grid;
  grid-template-columns: 100px 162px 262px;
  grid-column-gap: 20px;
}
Screenshot of Firefox's grid inspector that shows three columns of different widths. Creating columns of different widths is easy as pie.

Making Grids Responsive

No discussion about CSS Grid is complete without talking about the responsive aspect. There are several ways to make CSS Grid responsive. One way (probably the most popular way) is to use the fr unit. Another way is to change the number of columns with media queries.

fr is a flexible length that represents a fraction. When you use the fr unit, browsers divide up the open space and allocate the areas to columns based on the fr multiple. This means that to create four columns of equal size, you would write 1fr four times.

.grid {
  display: grid;
  grid-template-columns: 1fr 1fr 1fr 1fr;
  grid-column-gap: 20px;
}
GIF shows four columns created with the fr unit. These columns resize according to the available white space. Grids created with the fr unit respect the maximum width of the grid. (Large preview)

Let’s do some calculations to understand why four equal-sized columns are created.

First, let’s assume the total space available for the grid is 1260px.

Before allocating width to each column, CSS Grid needs to know how much space is available (or leftover). Here, it subtracts the grid-gap declarations from 1260px. Since each gap is 20px, we’re left with 1200px of available space (1260 - (20 * 3) = 1200).

Next, it adds up the fr multiples. In this case, we have four 1fr multiples, so browsers divide 1200px by four. Each column is thus 300px. This is why we get four equal columns.

However, grids created with the fr unit aren’t always equal!

When you use fr, you need to be aware that each fr unit is a fraction of the available (or leftover) space.

If you have an element that is wider than any of the columns created with the fr unit, the calculation needs to be done differently.

For example, the grid below has one large column and three small (but equal) columns even though it’s created with grid-template-columns: 1fr 1fr 1fr 1fr.

See the Pen CSS Grid `fr` unit demo 1 by Zell Liew (@zellwk) on CodePen.

After splitting 1200px into four and allocating 300px to each of the 1fr columns, browsers realize that the first grid item contains an image that is 1000px. Since 1000px is larger than 300px, browsers choose to allocate 1000px to the first column instead.

That means, we need to recalculate leftover space.

The new leftover space is 1260px - 1000px - 20px * 3 = 200px; this 200px is then divided among the three remaining fractions, so each of those columns ends up roughly 66px wide. Hopefully that explains why fr units do not always create equal-width columns.

If you want the fr unit to create equal-width columns every time, you need to force it with minmax(0, 1fr). For this specific example, you’ll also want to set the image’s max-width property to 100%.

See the Pen CSS Grid `fr` unit demo 2 by Zell Liew (@zellwk) on CodePen.
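In code, the fix from the demo above might look roughly like this (the column count and gap are just examples):

.grid {
  display: grid;
  /* minmax(0, 1fr) lets each column shrink below its content size,
     so the wide image can no longer stretch its column. */
  grid-template-columns: repeat(4, minmax(0, 1fr));
  grid-column-gap: 20px;
}

.grid img {
  /* Keep the image within its (now equal-width) column. */
  max-width: 100%;
}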

Note: Rachel Andrew has written an amazing article on how different CSS values (min-content, max-content, fr, etc.) affect content sizes. It’s worth a read!

Unequal-Width Grids

To create grids with unequal widths, you simply vary the fr multiple. Below is a grid that follows the golden ratio, where the second column is 1.618 times as wide as the first column, and the third column is 1.618 times as wide as the second.

.grid {
  display: grid;
  grid-template-columns: 1fr 1.618fr 2.618fr;
  grid-column-gap: 1em;
}
GIF shows a three-column grid created with the golden ratio. When the browser is resized, the columns resize accordingly. A three-column grid created with the golden ratio

Changing Grids At Different Breakpoints

If you want to change the grid at different breakpoints, you can declare a new grid within a media query.

.grid {
  display: grid;
  grid-template-columns: 1fr 1fr;
  grid-column-gap: 1em;
}

@media (min-width: 30em) {
  .grid {
    grid-template-columns: 1fr 1fr 1fr 1fr;
  }
}

Isn’t it easy to create grids with CSS Grid? Designers and developers of the past would have killed for such a possibility.

Height-Based Grids

Previously, it was impossible to make grids based on the height of the viewport, because there was no way for us to tell how tall the viewport was. Now, with viewport units, CSS Calc, and CSS Grid, we can even make grids based on viewport height.

In the demo below, I created grid squares based on the height of the browser.

See the Pen Height based grid example by Zell Liew (@zellwk) on CodePen.
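A stripped-down sketch of the idea; the numbers are arbitrary:

.grid {
  display: grid;
  /* Three rows and three columns, each sized from the viewport height
     minus two 20px gaps, so every cell is a square. */
  grid-template-rows: repeat(3, calc((100vh - 40px) / 3));
  grid-template-columns: repeat(3, calc((100vh - 40px) / 3));
  grid-gap: 20px;
}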

Jen Simmons has a great video that talks about designing for the fourth edge — with CSS Grid. I highly recommend you watch it.

Grid Item Placement

Positioning grid items was a big pain in the past because you had to calculate the margin-left property.

Now, with CSS Grid, you can place grid items directly with CSS without the extra calculations.

.grid-item {
  grid-column: 2; /* Put on the second column */
}
Screenshot of a grid item placed on the second column Placing an item on the second column.

You can even tell a grid item how many columns it should take up with the span keyword.

.grid-item {
  /* Put in the second column, span 2 columns */
  grid-column: 2 / span 2;
}
Screenshot of a grid item that's placed on the second column. It spans two columns You can tell grid items the number of columns (or even rows) they should occupy with the span keyword

Inspirations

CSS Grid enables you to lay things out so easily that you can create a lot of variations of the same website quickly. One prime example is Lynn Fisher’s personal home page.

If you’d like to find out more about what CSS Grid can do, check out Jen Simmons’s lab, where she explores how to create different kinds of layouts with CSS Grid and other tools.

To learn more about CSS Grid, check out the following resources:


Designing With Irregular Shapes

We are used to creating rectangular layouts on the web because the CSS box model is a rectangle. Besides rectangles, we’ve also found ways to create simple shapes, such as triangles and circles.

Today, we don’t need to stop there. With CSS shapes and clip-path at our disposal, we can create irregular shapes without much effort.
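As a small taste of both properties (the values below are picked arbitrarily):

/* Slant the top and bottom edges of a section. */
.slanted-panel {
  clip-path: polygon(0 0, 100% 5%, 100% 100%, 0 95%);
}

/* Let text wrap around a circular image instead of its rectangular box. */
.round-portrait {
  float: left;
  width: 200px;
  height: 200px;
  border-radius: 50%;
  shape-outside: circle(50%);
}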

For example, Aysha Anggraini experimented with a comic-strip-inspired layout with CSS Grid and clip-path.

See the Pen Comic-book-style layout with CSS Grid by Aysha Anggraini (@rrenula) on CodePen.

Hui Jing explains how to use CSS shapes in a way that allows text to flow along the Beyoncé curve.

An image of Huijing's article, where text flows around Beyoncé. Text can flow around Beyoncé if you wanted it to!

If you’d like to dig deeper, Sara Soueidan has an article to help you create non-rectangular layouts.

CSS shapes and clip-path give you infinite possibilities to create custom shapes unique to your designs. Unfortunately, syntax-wise, CSS shapes and clip-path aren’t as intuitive as CSS Grid. Luckily, we have tools such as Clippy and Firefox’s Shape Path Editor to help us create the shapes we want.

Image of Clippy, a tool to help you create custom CSS shapes Clippy helps you create custom shapes easily with clip-path.

Switching Text Flow With CSS’ writing-mode

We’re used to seeing words flow from left to right on the web because the web is predominantly made for English-speaking folks (at least that’s how it started).

But some languages don’t flow in that direction. For example, Chinese can be read from top to bottom and from right to left.

CSS’ writing-mode makes text flow in the direction native to each language. Hui Jing experimented with a Chinese-based layout that flows top down and right to left on a website called Penang Hokkien. You can read more about her experiment in her article, “The One About Home”.

Besides articles, Hui Jing has a great talk on typography and writing-mode, “When East Meets West: Web Typography and How It Can Inspire Modern Layouts”. I highly encourage you to watch it.

An image of the Penang Hokken, showcasing text that reads from top to bottom and right to left. Penang Hokkien shows that Chinese text can be written from top to bottom, right to left.

Even if you don’t design for languages like Chinese, it doesn’t mean you can’t apply CSS’ writing-mode to English. Back in 2016, when I created Devfest.asia, I flexed a small creative muscle and opted to rotate text with writing-mode.

An image that shows how I rotated text in a design I created for Devfest.asia Tags were created by using writing mode and transforms.
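The essence of that technique is only a couple of declarations; here is a sketch with a made-up class name:

.tag {
  /* Flow the text vertically, as in traditional East Asian layouts… */
  writing-mode: vertical-rl;
  /* …then flip it so the label reads from bottom to top. */
  transform: rotate(180deg);
}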

Jen Simmons’s lab contains many experiments with writing-mode, too. I highly recommend checking it out.

An image from Jen Simmons’s lab that shows a design from Jan Tschichold.

Effort And Ingenuity Go A Long Way

Even though the new CSS tools are helpful, you don’t need any of them to create unique websites. A little ingenuity and some effort go a long way.

For example, in Super Silly Hackathon, Cheeaun rotates the entire website by -15 degrees and makes you look silly when reading the website.

A screenshot from Super Silly Hackathon, with text slightly rotated to the left. Cheeaun makes sure you look silly if you want to enter Super Silly Hackathon.

Darin Senneff made an animated login avatar with some trigonometry and GSAP. Look at how cute the ape is and how it covers its eyes when you focus on the password field. Lovely!

When I created the sales page for my course, Learn JavaScript, I added elements that make the JavaScript learner feel at home.

Image where I used JavaScript elements in the design for Learn JavaScript. I used the function syntax to create course packages instead of writing about course packages

Wrapping Up

A unique web design isn’t just about layout. It’s about how the design integrates with the content. With a little effort and ingenuity, all of us can create unique designs that speak to our audiences. The tools at our disposal today make our jobs easier.

The question is, do you care enough to make a unique design? I hope you do.

Smashing Editorial(ra, il, al)

May 04 2018

12:00

Fast UX Research: An Easier Way To Engage Stakeholders And Speed Up The Research Process

Fast UX Research: An Easier Way To Engage Stakeholders And Speed Up The Research Process

Fast UX Research: An Easier Way To Engage Stakeholders And Speed Up The Research Process

Zoe Dimov
2018-05-04T14:00:27+02:002018-05-18T15:22:16+00:00

Today, UX research has earned wide recognition as an essential part of product and service design. However, UX professionals still seem to face two big problems when it comes to UX research: a lack of engagement from the team and stakeholders, and the pressure to constantly reduce the time available for research.

In this article, I’ll take a closer look at each of these challenges and propose a new approach known as ‘FAST UX’ in order to solve them. This is a simple but powerful tool that you can use to speed up UX research and turn stakeholders into active champions of the process.

Contrary to what you might think, speeding up the research process (in both the short and long term) requires effective collaboration, rather than you going away and soldiering on by yourself.

The acronym FAST (Focus, Attend, Summarize, Translate) wraps up a number of techniques and ideas that make the UX process more transparent, fun, and collaborative. I also describe a 5-day project with a central UK government department that shows you how the model can be put into practice.

The article is relevant for UX professionals and the people who work with them, including product owners, engineers, business analysts, scrum masters, marketing and sales professionals.

1. Lack Of Engagement Of The Team And Stakeholders

“Stakeholders have the capacity for being your worst nightmare and your best collaborator.”

UIE (2017)

As UX researchers, we need to ensure that “everyone in our team understands the end users with the same empathy, accuracy and depth as we do.” It has been shown that there is no better alternative to increasing empathy than involving stakeholders to actually experience the whole process themselves: from the design of the study (objectives, research questions), to recruitment, set up, fieldwork, analysis and the final presentation.


Anyone who has tried to do this knows that it can be extremely difficult to organize and get stakeholders to participate in research. There are two main reasons for this:

  1. Research is somebody else’s job.
    In my experience, UX professionals are often hired to “do the UX” for a company or organization. Even though the title of “Lead UX Researcher” sounds great and very important in my head, it often leads to misconceptions during kick-off meetings. Everyone automatically assumes that research is solely MY responsibility. It’s no wonder that stakeholders don’t want to get involved in the project. They assume research is my and nobody else’s job.
  2. UX process frameworks are incomplete.
    The problem is that even when stakeholders want to engage and participate in UX, they still do not know *how* they should get involved and *what* they should do. We spend a lot of time selling a UX process and research frameworks that are useful but ultimately incomplete — they do not explain how non-researchers can get involved in the research process.
Problems associated with stakeholder involvement in UX research. Fig. 1. Despite our enthusiasm as researchers, stakeholders often don’t understand how to get involved with the research process.

Further, a lot of stakeholders can find words such as ‘design,’ ‘analysis’ or ‘fieldwork’ intimidating or irrelevant to what they do. In fact, “UX is rife with jargon that can be off-putting to people from other fields.” In some situations, terms are familiar but mean something completely different, e.g., research in UX versus marketing research.

2. Pressure To Constantly Reduce The Time For Research

Another issue is that there is a constantly growing pressure to speed up the UX process and reduce the time spent on research. I cannot count the number of times when a project manager asked me to shorten a study even further by skipping the analysis stage or the kick-off sessions.

While previously you could spend weeks on research, a 5-day research cycle is increasingly becoming the norm. In fact, the book Sprint describes how research can dwindle to just a day (from an overall 5-day cycle).

Considering this, there is a LOT of pressure on UX researchers to deliver fast, without compromising the quality of the study. The difficulty increases when there are multiple stakeholders, each with their own opinions, demands, views, assumptions, and priorities.

The Fast UX Approach

Contrary to what you might think, reducing the time it takes to do UX research does not mean that you need to soldier on by yourself. I have done this, and it only works in the short term. It does not matter how amazing the findings are — there are not enough PowerPoint slides in the world to convince a team of the urgency to take action if they have not been on the research journey themselves.

In the long term, the more actively engaged your team and stakeholders are in the research, the more empowered they will feel and the more willing they will be to take action. Productive collaboration also means that you can move together at a quicker pace and speed up the whole research process.

The FAST UX Research framework (see Fig. 2 below) is a tool to truly engage team members and stakeholders in a way that turns them into active advocates and champions of the research process. It shows non-researchers when and how they should get involved in UX Research.

The FAST UX Research approach; FAST UX Research methodology. Fig. 2. The FAST User Experience Research framework

In essence, stakeholders take ownership of each of the UX research stages by carrying out the four activities, each corresponding to its research stage.

Working together reduces the time it takes for UX Research. The true benefit of the approach, however, is that, in the long term, it takes less and less time for the business to take action based on research findings as people become true advocates of user-centricity and the research process.

This approach can be applied to any qualitative research method and with any team. For example, you can carry out FAST usability testing, FAST interviews, FAST ethnography, and so on. In order to be effective, you will need to explain this approach to your stakeholders from the start. Talk them through the framework, explaining each stage. Emphasize that this is what EVERYONE does, that it’s their work as much as the UX researcher’s job, and that it’s only successful if everyone is involved throughout the process.

Stage 1: Focus (Define A Common Goal)

There is a uniform consensus within UX that a research project should start by defining its purpose: why is this research done and how will the results be acted upon?

Focus in FAST UX Research; first stage in the FAST UX Research process. Fig. 3. Focus is about defining clear objectives and goals for the research and it’s ultimately the team’s and all stakeholders’ shared responsibility to do this.

Generally, this is expressed within the research goals, objectives, research questions and/or hypotheses. Most projects start with a kick-off meeting where those are either discussed (based on an available brief) or are defined during the meeting.

The most regular problem with kick-off sessions like these is that stakeholders come up with too many things they want to learn from a study. The way to turn the situation around is to assign a specific task to your immediate team (other UX professionals you work with) and stakeholders (key decision makers): they will help focus the study from the beginning.

The way they will do that is by working together through the following steps:

  1. Identify as a group the current challenges and problems.
    Ask someone to take notes on a shared document; alternatively, ask everyone to participate and write on sticky notes which are then displayed on a “project wall” for everyone to see.
  2. Identify the potential objectives and questions for a research study.
    Do this the same way you did the previous step. You don’t need to commit to anything yet.
  3. Prioritize.
    Ask the team to order the objectives and questions, starting with the most important ones.
  4. Reword and rephrase.
    Look at the top 3 questions and objectives. Are they too broad or narrow? Could they be reworded so it’s clearer what is the focus of the study? Are they feasible? Do you need to split or merge objectives and questions?
  5. Commit to be flexible.
    Agree on the top 1-2 objectives and ensure that you have agreement from everyone that this is what you will focus on.

Here are some questions you can ask to help your stakeholders and team to get to the focus of the study faster:

  • From the objectives we have recognized, what is most important?
  • What does success look like?
  • If we only learn one thing, which one would be the most important one?

Your role during the process is to provide expertise to determine if:

  • The identified objectives and questions are feasible for a single study;
  • Help with the wording of objectives and questions;
  • Design the study (including selecting a methodology) after the focus has been identified.

At first sight, the Focus and Attend (next stages) activities might be familiar as you are already carrying out a kick-off meeting and inviting stakeholders to attend research sessions.

However, adopting a FAST approach means that your stakeholders have as much ownership as you do during the research process because work is shared and co-owned. Reiterate that the process is collaborative and at the end of the session, emphasize that agreeing on clear research objectives is not easy. Remind everybody that having a shared focus is already better than what many teams start with.

Finally, remind the team and your stakeholders what they need to do during the rest of the process.


Stage 2: Attend (Immerse The Team Deeply In The Research Process)

Seeing first hand the experience of someone using a product or service is so rich that there is no substitute for it. This is why getting stakeholders to observe user research is still considered one of the best and most powerful ways to engage the team.

Attend in FAST UX Research; second stage in FAST UX Research. Fig. 4. Attend in FAST UX Research is about encouraging the team and stakeholders to be present at all research sessions, but also to be actively engaged with the research.

What often happens is that observers join in on the day of the research study and then they spend the time plastered to their laptops and mobile phones. What is worse, some stakeholders often talk to the note-taker and distract the rest of the design team who need to observe the sessions.

This is why it is just as important that you get the team to interact with the research. The following activities allow the team to immerse themselves in the research session. You can ask stakeholders to:

  • Ask questions during the session through a dedicated live chat (e.g. Slack, Google Hangouts, Skype);
  • Take notes on sticky notes;
  • Summarize observations for everyone (see next stage).

Assign one person per session for each of these activities. Have one “live chat manager,” one “note-taker,” and one “observer” who will sum up the session afterwards.

Rotate people for the next session.

Before the session, it’s useful to walk observers through the ‘ground rules’ very briefly. You can have a poster similar to the one GDS developed that will help you do this and remind the team of their role during the study (see Fig. 5 below).

An observation poster; user research poster. Fig. 5. A poster can be hung in the observation room and used to remind the team and stakeholders what their responsibilities are and the ground rules during observation.

Farrell (2017) provides more detail on effective ways for stakeholders to take notes together. When you have multiple stakeholders and it’s not feasible for them to physically attend a field visit (e.g. on the street, in an office, at the home of the participant), you could stream the session to an observation room.

Stage 3: Summarize (Analysis For Non-Researchers)

I am a strong supporter of the idea that analysis starts the moment fieldwork begins. During the very first research session, you start looking for patterns and interpretation of what the data you have means.

Summarize in FAST UX Research; the third stage in FAST UX Research. Fig. 6. Summarize in FAST UX Research is about asking the team and your stakeholders to tell you about what they thought were the most interesting aspects of user research.

Even after the first session (but typically towards the end of fieldwork) you can carry out collaborative analysis: a fun and productive way that ensures that you have everyone participating in one of the most important stages of research.

The collaborative analysis session is an activity where you provide an opportunity for everyone to be heard and create a shared understanding of the research.

Since you’re including other experts’ perspectives, you’re increasing the chances to identify more objective and relevant insights, and also for stakeholders to act upon the results of the study.

Even though ‘analysis’ is an essential part of any research project, a lot of stakeholders get scared by the word. The activity sounds very academic and complex. This is why at the end of each research session, research day, or the study as a whole, the role of your stakeholders and immediate team is to summarize their observations. Summarizing may sound superfluous but is an important part of the analysis stage; this is essentially what we do during “Downloading” sessions.

Listening to someone’s summary provides you with an opportunity to understand:

  • What they paid attention to;
  • What is important for them;
  • Their interpretation of the event.

Summary At The End Of Each Session

You do this by reminding everyone at the beginning of the session that at the end you will enter the room and ask them to summarize their observations and recommendations.

You then end the session by asking each stakeholder the following:

  • What were their key observations (see also Fig. 3)?
  • What happened during the session?
  • Were there any major difficulties for the participant?
  • What were the things that worked well?
  • Was there anything that surprised them?
  • This will make the team more attentive during the session, as they know that they will need to sum it up at the end. It will also help them to internalize the observations (and later, transition more easily to findings).

    This is also the time to consistently share with your team what you think stands out from the study so far. Avoid the temptation to do a ‘big reveal’ at the end. It’s better if outcomes are told to stakeholders many times.

    On multiple occasions, research has given me great outcomes, but instead of sharing them regularly, I kept them to myself until the final report. It didn’t work well. A big reveal at the end leaves stakeholders bewildered, as they often cannot jump from observations to insights that quickly. As a result, there is either stubborn pushback or an indifferent shrug.

    Summary At The End Of The Day

    A summary of the event or the day can then naturally transition into a collaborative analysis session. Your job is to moderate the session.

    The job of your stakeholders is to summarize the events of the day and the final results. Ask a volunteer to talk the group through what happened during the day. Other stakeholders can then add to these observations.

    Summary At The End Of The Study

    After the analysis is done, ask one or two stakeholders to summarize the study. Make sure they cover why the research was done, what happened during the study, and what the primary findings are. They can also do this by walking through the project wall (if you have one).

    It’s very difficult to hold back from talking about your own research and let someone else do it, but it’s worth it. No matter how much you’re itching to do this yourself, don’t! It’s a great opportunity for people to internalize the research and become comfortable with the process. This is one of the key moments to turn stakeholders into active advocates of user research.

    At the end of this stage, you should have 5-7 findings that capture the study.

    Stage 4: Translate (Make Stakeholders Active Champions Of The Solution)

    “Research doesn’t have a value unless it results in decisions and actions.”

    —Lang and Howell (2017).

    Even when you agree on the findings, stakeholders might still disagree about what the research means or lack the commitment to take further action. This is why, after summarizing, you should ask your stakeholders to work with you to identify the “Now what?”: what it all means for the organization, product, service, team, and/or each of them individually.

    Fig. 7. Translate in FAST UX Research is about asking the team or individual stakeholders to discuss each finding and articulate how it will impact the business, the service or product, or their own work.

    Traditionally, it was the UX researcher’s job to write clear, precise, descriptive findings and actionable recommendations. However, if the team and stakeholders are not part of identifying those recommendations, they might be resistant to change in the future.

    To prevent later pushback, ask stakeholders to identify the “Now what?” (also referred to as ‘actionable recommendations’). Together, you’ll be able to identify how the insights and findings will:

    • Affect the business and what needs to be done now;
    • Affect the product/service and what changes need to be made;
    • Affect people individually and what actions they need to take;
    • Lead to potential problems and challenges, and what their solutions might be;
    • Help solve problems or identify potential solutions.

    Stakeholders and the team can translate the findings at the end of a collaborative analysis session.

    If you decide to separate the activities and conduct a meeting in which the only focus is on actionable recommendations, then consider the following format:

    1. Briefly talk through the 5-7 main findings from the study (as a refresher if this stage is done separately from the analysis session, or with other stakeholders).
    2. Split the group into teams and ask them to work on one finding/problem at a time.
    3. Ask them to list as many ways as possible in which the finding affects them.
    4. Ask one person from each group to present the findings back to the team.
    5. Ask one or two stakeholders to summarize the whole study, together with the methods, findings, and recommendations.

    Later, you can hold multiple similar workshops; this is how you engage different departments across the organization.

    Fast UX In Practice

    An excellent example of a FAST UX Research approach in practice is a project I was hired to carry out for a central UK government department. The ultimate goal of the project was to identify user requirements for a very complex internal system.

    At first sight, this was a very challenging project because:

    • There was no time to get to know the department or the client.
      Usually, I would have at least a week or two to get to know the client, their needs, opinions, internal pressures, and challenges. For this project, I had to start work on Monday with a team I had never met, in a building I had never worked in, in a domain I knew little about, and finish on Friday of the same week.
    • The system was very complex and required intense research.
      The internal system and the nature of work were very complex; this required gathering data with at least a few research methods (for triangulation).
    • This was the first time the team had worked with a UX Researcher.
      The stakeholders were primarily IT specialists. However, I was lucky that they were very keen and enthusiastic to be involved in the project and get their hands dirty.
    • Stakeholders had limited availability.
      As is the case on many other projects, all stakeholders were extremely busy, as they had their own work on top of the project. Nonetheless, we made it work, even if it meant meeting over lunch or for a 15-minute wrap-up before we went home.
    • There were internal pressures and challenges.
      As with any department and huge organization, there were a number of internal pressures and challenges. Some of them I expected (e.g. legacy systems, slow pace of change) but some I had no clue about when I started.
    • We had to coordinate work with external teams.
      An additional challenge was the need to work with and coordinate efforts with external teams at another UK department.

    Despite all of these challenges, this was one of the most enjoyable projects I have worked on because of the tight collaboration initiated by the FAST approach.

    The project consisted of:

    • 1 day of kick-off sessions and getting to know the team,
    • 2.5 days of contextual inquiries and shadowing of internal team members,
    • Half a day for a co-creation workshop, and
    • 1 day for analysis and reporting of the results.

    In the process, I gathered data from 20+ employees, logged 16+ hours of observations, and collected 300+ photos and about 100 pages of notes. It is a great example of cramming 3 weeks’ worth of work into a mere 5-day research cycle. More importantly, people in the department were genuinely excited about the process.

    Here is how we did it using a FAST UX Research approach:

    • Focus
      At the beginning of the project, the two key stakeholders identified what the focus of the research would be, while my role was mainly to help prioritize the objectives, tweak the research questions, and check for feasibility. In this sense, I listened and mostly asked questions, interjecting occasionally with examples from previous projects or options that helped us adjust our approach.

      While I wrote the main discussion guide for the contextual inquiries and shadowing sessions, we sat together with the primary team to discuss and design the co-creation workshop with internal users of the system.
    • Attend
      During the workshop, one of the stakeholders moderated half of the session, while the other took notes and closely observed the participants. It was a huge internal success: stakeholders felt their efforts to modernize the department gained visibility, while employees felt listened to and involved in the research.
    • Summarize
      Immediately after the workshop, we sat together with the stakeholders for a 30-minute meeting where I had them summarize their observations.

      As a result of the shadowing, contextual inquiries and co-creation workshop, we were able to identify 60+ issues and problems with the internal system (with regards to integration, functionality, and usability), all captured in six high-level findings.
    • Translate
      Later, we discussed with the team how each of the six major findings translated to a change or implication for the department, the internal system, as well as collaboration with other departments.

    We were so well aligned with the team that when we had to present our work to another UK government department, I could let the stakeholders talk about the process and our progress.

    My final task (over two additional days) was to document all of the findings in a research report. This was necessary as a knowledge repository, because I had to move on to other projects.

    With a more traditional approach, the project could easily have spanned 3 weeks. More importantly, quickly understanding individual and team pressures and challenges was key to the success of the new system. This could not have happened within the allocated time without a collaborative approach.

    The FAST UX approach resulted in tight collaboration, strong co-ownership, and a shared sense of progress; all of these not only shortened the project but also instilled a feeling of excitement about the UX research process.

    Have You Tried It Out Already?

    As UX research becomes ever more popular, gone are the days when we could soldier on by ourselves and consult stakeholders only at the end.

    Mastering our craft as UX researchers means engaging others within the process and being articulate, clear, and transparent about our work. The FAST approach is a simple model that shows how to engage non-researchers with the research process. Reducing the time it takes to do research, both in the short term (i.e. the study itself) and the long term (i.e. using the research results), is a strategic advantage for the researcher, the team, and the business as a whole.

    Would you like to improve your efficiency and turn stakeholders into user research advocates? Go and try it out. You can then share your stories and advice here.

    I would love to hear your comments, suggestions, and any feedback you care to share! If you have tried it out already, do you have success stories you want to share? Be as open as you can: what worked well, and what didn’t? As with all other things UX, it’s most fun if we learn together as a team.

    Smashing Editorial (cc, ra, il)