What next for grads? Find out in Computer Arts 294

What’s next? It’s the question on many a graduate’s mind at the moment. With grad shows being disassembled as quickly as they were set up, for many the next couple of months will be a time of reflection on the last three years, and some sober planning for the future.

We’re here to help with this issue of Computer Arts! We explore the realities facing creative graduates in the UK right now, and speak to industry insiders about what steps to take – and why to take them.

And to make the issue’s cover as engaging as possible, the cover lines are all hidden! With the help of our cover treatment partners Celloglas, you can scratch the foil bars off to find out just what we’ve got in store this issue (or read below).

Buy Computer Arts issue 294 now

What next for grads in Computer Arts issue 294

Discover what the next steps into the industry are

Elsewhere in the mag we visit the studios of 2018’s Brand Impact Awards Best in Show super-power Superunion. The team discuss their recent D&AD pencil-winning work on the BBC 2 rebranding in depth, and reveal why they aim to retain the feel, and ethos, of a boutique studio.

We also chat to Emily Forgot about her design/art/illustrations, check out a new digital exhibition that calls out misogyny while pushing the case for action on climate change, speak to Michael Johnson about his new book on where the best ideas come from, and pick renowned design writer Steven Heller’s brains on the three concepts that have helped graphic design evolve.

Save up to 70% on a subscription now!

There’s also loads more inside, including a look at the key objects that have inspired the lives of eight top creatives. Have a look at the gallery below, and then grab your copy today! 

Computer Arts is the world’s best-selling design magazine, bursting at the seams with insight, inspiration, interviews and all the best new design projects. For all this delivered direct to your door each month, subscribe to Computer Arts. Subscribe today, and you can save up to 70 per cent off the cover price!

Related articles:

Facebook’s Calibra logo looks suspiciously like the Current logo

Facebook’s new cryptocurrency, Libra, has come under fire from several angles. First, there’s the concept itself: a bank and currency owned by Facebook. Then there’s the logo for Libra, which many felt missed a trick by not featuring a pair of scales like its star sign namesake. And then there’s the logo for Libra’s digital wallet, Calibra, which looks rather like the logo for the bank Current.

Current, which calls itself ‘the bank for modern life’, posted a cutting tweet last week of its logo next to Calibra’s, with the comment: “This is what happens when you only have 1 crayon left.”

Perhaps the designers could have benefited from reading our guide to logo design.

It’s hard to deny that the logos are similar. The Current logo features a tilde in a circle, and the Calibra logo features, well… a tilde in a circle. There are, of course, differences, including that the Current logo’s circle is a gradient, while Calibra’s logo is monochrome, hence the ‘one crayon left’ comment.

So was this an honest mistake? 

That’s where the plot thickens. Current’s logo was designed by San Francisco design firm, Character. And Stuart Sopp, CEO of Current, told CNBC that Character also created the logo for Facebook’s Calibra. The co-founder of Character also posted on LinkedIn about working on a secret crypto project with Facebook – the post has since been deleted.

“We put six months of hard work into this with that design firm, which they basically reused for Facebook without changing much,” Sopp told CNBC. “Facebook is a big company that should have done their due diligence on this.”

According to CNBC, Current has asked a law firm, Goodwin Procter, to determine if it has a trademark or patent infringement case. 

Will this copyright case have much impact on the social media giant and its plans for financial domination? We can’t help but doubt it. Although one Twitter user has proposed an alternative logo, just in case Facebook is looking:

Read more:

Forget Amazon Prime… save £100 on an iPad Pro right now!

If you’re after discounts on the best tablets around, this offer could be just what you’ve been waiting for. Apple’s iPad Pro is a popular digital canvas among creatives due to its balance of power and portability. And thanks to online retailer Very, you can get £100 credit back when you buy a select model via a twelve-month buy now, pay later payment plan. And it’s not just iPad Pros that are on offer. The online retailer is also offering money back on Apple Macs, the iPhone XR, the iPhone 8, the Apple Watch Series 3 and more.

These tempting offers come ahead of Amazon Prime Day 2019. Around this time of year we typically see retailers whipping out juicy offers in a bid to beat the best Prime Day deals that are just around the corner. And while we don’t know for certain if Amazon can beat this deal on Prime Day, we’re betting Very is doing all it can to compete.

Get up to £150 back on selected Apple products

The models on offer from Very include the best-selling 10.5 inch iPad Pro 2017 and the more recent 12.9 inch iPad Pro. But there are dozens more Apple products on offer so head over to Very to check out the full selection. 

If you’re interested in this iPad Pro deal, don’t hang around. You have until 27 June 2019 21.00 BST to get £100 credit back on an iPad Pro. Here’s how to get the offer:

1. Buy an eligible Apple iPad Pro on 12 months Buy Now Pay Later. All eligible products are listed on the offer page, accessed via ‘shop now’. (The offer is not valid if you choose 6 or 9 months Buy Now Pay Later.)
2. Enter P7ANV in the promo code box at checkout and your item will be put on 12 months Buy Now Pay Later automatically. Very will then credit £100 back to your account by 8 August 2019 or the dispatch date, whichever is later, and will email you when the credit has been applied. Orders must be placed online by 9pm on the final day of the promotion. If you return your item, the credit will be reversed. The offer can’t be used in conjunction with other offers (entering any promo code other than P7ANV at checkout will exclude you from this promotion).

Read more:

What I Learned From Designing AR Apps


Gleb Kuznetsov

The digital and technological landscape is constantly changing — new products and technologies are popping up every day. Designers have to keep track of what is trending and where creative opportunities are. A great designer has the vision to analyze new technology, identify its potential, and use it to design better products or services.

Among the various technologies that we have today, there’s one that gets a lot of attention: Augmented Reality. Companies like Apple and Google realize the potential of AR and invest significant amounts of resources into this technology. But when it comes to creating an AR experience, many designers find themselves in unfamiliar territory. Does AR require a different kind of UX and design process?

As for me, I’m a big fan of learning by doing, and I was lucky enough to work on the Airbus mobile app as well as the Rokid AR glasses OS product design. I’ve established a few practical rules that will help designers get started creating compelling AR experiences. The rules work both for mobile augmented reality (MAR) and for AR glasses experiences.

Rokid Glasses motion design exploration by Gleb Kuznetsov


Let’s quickly define the key terms that we will use in the article:

  • Mobile Augmented Reality (MAR) is an augmented reality experience delivered on mobile devices (smartphones and tablets);
  • AR glasses are wearable smart displays with see-through lenses that present an augmented reality experience.

1. Get Buy-In From Stakeholders

Similar to any other project you work on, it is vital to get support from stakeholders as early in the process as possible. Despite AR being buzzed about for years, many stakeholders have never used AR products. As a result, they can question the technology simply because they don’t understand the value it delivers. Our objective is to get agreement from them.

“Why do we want to use AR? What problem does it solve?” are questions that stakeholders ask when they evaluate the design. It’s vital to connect your design decisions to the goals and objectives of the business. Before reaching stakeholders, you need to evaluate your product for AR potential. Here are three areas where AR can bring a lot of value:

  • Business Goals
    Understand the business goals you’re trying to solve for using AR. Stakeholders always appreciate design solutions connected to the goals of the business. Much of the time, businesses respond to quantifiable numbers, so be ready to explain how your design is intended to help the company make more money or save money.
  • Helpfulness For Users
    AR can provide a better user experience and make the user journey a lot easier. Stakeholders appreciate technologies that improve the main use of the app. Think about the specific value that AR brings to users.
  • Creativity
    AR is excellent when it comes to creating a more memorable experience and improving the design language of a product. Businesses often have a specific image they are trying to portray, and product design has to reflect this.

Only when you have a clear answer to the question “Why is this better with AR?” should you share your thoughts with stakeholders. Invest time in preparing a presentation. Seeing is believing, and you’ll have a better chance of buy-in from management when you show them a demo. The demo should make it clear what you are proposing.

2. Discovery And Ideation

Explore And Use Solutions From Other Fields

No matter what product you design, you have to spend enough time researching the subject. When it comes to designing for AR, look for innovations and successful examples with similar solutions from other industries. For example, when my team was designing audio output for AR glasses, we learned a lot from headphones and speakers on mobile phones.

Design User Journey Using “As A User I Want” Technique

One of the fundamental things you should remember when designing AR experiences is that AR exists outside of the phone or glasses. AR technology is just a medium that people use to receive information. The tasks that users want to accomplish using this technology are what is really important.

“How to define a key feature set and be sure it will be valuable for our users?” is a crucial question you need to answer before designing your product. Since the core idea of user-centered design is to keep the user in the center, your design must be based on the understanding of users, their goals and contexts of use. In other words, we need to embrace the user journey.

When I work on a new project, I use a simple technique “As a [type of user], I want [goal] because [reason].” I put myself in the user’s shoes and think about what will be valuable for them. This technique is handy during brainstorming sessions. Used together with storyboarding, it allows you to explore various scenarios of interaction.
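The “As a [type of user], I want [goal] because [reason]” template can be sketched as a tiny helper for logging brainstorm output. This is a minimal illustration only; the example stories below are hypothetical, not taken from the Airbus project.

```python
def user_story(user_type, goal, reason):
    """Format a single user story for a brainstorming log."""
    return f"As a {user_type}, I want {goal} because {reason}."

# Hypothetical stories of the kind a brainstorming session might produce.
stories = [
    user_story("frequent traveler", "to preview my seat in AR",
               "I can plan my boarding before I reach the cabin"),
    user_story("first-time flyer", "to explore the upper deck virtually",
               "I want to know what to expect on the A380"),
]

for story in stories:
    print(story)
```

Pairing each formatted story with a storyboard frame keeps the session focused on scenarios of interaction rather than features in the abstract.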

In the article “Designing Tomorrow Today: the Airbus iflyA380 App,” I’ve described in detail the process that my team followed when we created the app. The critical element of the design process was getting into the passenger’s mind, looking for insights into what the best user experience would be before, during and after their flight.

To understand what travelers like and dislike about the travel experience, we held a lot of brainstorming sessions together with Airbus. Those sessions revealed a lot of valuable insights. For example, we found that visiting the cabin (from home) before flying on the A380 was one of the common things users want to do. The app uses augmented reality so people can explore the cabin and virtually visit the upper deck, the cockpit, the lounges — wherever they want to go — even before boarding the plane.

IFLY A380 iOS app design by Gleb Kuznetsov.

The app also accompanies passengers from the beginning to the end of their journey: basically, everything a traveler wants to do with the trip is wrapped up in a single app. Finding your seat is one of the features we implemented. This feature uses AR to show your seat in the plane. As a frequent traveler, I love that feature; you don’t need to search for your seat when you enter the cabin, you can do it beforehand, from the comfort of your couch. Users can access this feature right from the boarding pass, by tapping on the ‘glass’ icon.

IFLY A380 app users can access the AR feature by tapping on the ‘glass’ icon.

Narrow Down Use Cases

It might be tempting to use AR to solve a few different problems for users. But in many cases, it’s better to resist this temptation. Why? Because by adding too many features to your product, you make it not only more complex but also more expensive. This rule is even more critical for AR experiences, which generally require more effort. It’s always better to start with one simple but well-designed AR experience than with multiple complex but loosely designed experiences.

Here are a few simple rules to follow:

  • Prioritize the problems and focus on the critical ones.
  • Use storyboarding to understand exactly how users will interact with your app.
  • Remember to be realistic. Being realistic means that you need to strike a balance between creativity and technical capabilities.

Use Prototypes To Assess Ideas

When we design traditional apps, we often use static sketches to assess ideas. But this approach won’t work for AR apps.

Understanding whether a particular idea is good or bad cannot be captured from a static sketch; quite often the ideas that look great on paper don’t work in a real-life context.

Thus, we need to interact with a prototype to get this understanding. That’s why it’s essential to get to prototyping state as soon as possible.

It’s important to mention that by ‘prototyping state’ I don’t mean a state where you create a polished high-fidelity prototype that looks and works like a real product. I mean using rapid prototyping to build a prototype that helps you experience the interaction. You need to make prototypes really fast; remember that the goal of rapid prototyping is to evaluate your ideas, not to demonstrate your skills as a visual designer.

3. Design

Similar to any other product you design, when you work on an AR product your ultimate goal is to create an intuitive, engaging, and clean interface. But this can be challenging, since the interface in AR apps accounts for both input and output.

Physical Environment

AR is inherently an environmental medium. That’s why the first step in designing an AR experience is defining where the user will be using your app. It’s vital to select the environment up front. And when I say ‘environment’, I mean the physical environment where the user will experience the app, whether indoors or outdoors.

Here are three crucial moments that you should consider:

  1. How much space do users need to experience AR? Users should have a clear understanding of the amount of space they’ll need for your app. Help them understand the ideal conditions before they start the experience.
  2. Anticipate that people will use your app in environments that aren’t optimal for AR. Most physical environments have limitations. For example, if your app is an AR table tennis game, your users might not have a large horizontal surface. In this case, you might want to generate a virtual table based on the device’s orientation.
  3. Light estimation is essential. Your app should analyze the environment automatically and provide contextual guidance if the environment is not good enough. If it is too dark or too bright for your app, tell the user to find a better place to use it. ARCore and ARKit have built-in systems for light estimation.
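As a rough illustration of the third point, contextual guidance driven by light estimation might look like the sketch below. The intensity scale and thresholds here are hypothetical; in a real app you would read the estimate from ARKit’s or ARCore’s light-estimation API and tune the bounds for your content.

```python
# Illustrative bounds on ambient intensity; real values depend on the
# SDK's light-estimation units and on your app's content.
TOO_DARK = 200
TOO_BRIGHT = 2000

def lighting_guidance(ambient_intensity):
    """Return a user-facing hint when the environment is unsuitable,
    or None when no guidance is needed."""
    if ambient_intensity < TOO_DARK:
        return "It's too dark here - try moving to a brighter spot."
    if ambient_intensity > TOO_BRIGHT:
        return "It's too bright - try reducing glare or moving indoors."
    return None
```

The point of the sketch is the shape of the flow: check the estimate every session, and surface a plain-language hint instead of silently degrading the experience.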

When my team designed the Airbus iflyA380 mobile AR experience, we took the available physical space into account. We also considered other aspects of interaction, such as the speed at which the user should make decisions. For instance, a user who wants to find her seat during boarding won’t have much time.

We sketched the environment (in our case, it was a plane inside and outside) and put AR objects in our sketch. By making our ideas tangible, we got an understanding of how the user will want to interact with our app and how our app will adapt to the constraints of the environment.

AR Realism And AR Objects Aesthetics

After you define the environment and its required properties, you will need to design the AR objects. One of the goals of creating an AR experience is to blend the virtual with the real. The objects you design should fit into the environment: people should believe that AR objects are real. That’s why it’s important to render digital content in context with the highest levels of realism.

Here are a few rules to follow:

  • Focus on the level of detail and design 3D assets with lifelike textures. I recommend using a multi-layer texture model such as PBR (Physically Based Rendering). Most AR development tools support it, and it is the most cost-effective way to achieve an advanced degree of detail for your AR objects.
  • Get the lighting right. Lighting is a crucial factor in creating realism; the wrong light instantly breaks the immersion. Use dynamic lighting, reflect environmental lighting conditions on virtual objects, and cast object shadows and reflections on real-world surfaces to create more realistic objects. Your app should also react to changes in real-world lighting.
  • Minimize the size of textures. Mobile devices are generally less powerful than desktops. Thus, to let your scene load faster, don’t make textures too large. Strive to use 2k resolution at most.
  • Add visual noise to AR textures. Flat-colored surfaces will look fake to the user’s eye. Textures will appear more lifelike when you introduce rips, pattern disruptions, and other forms of visual noise.
  • Prevent flickering. Update the scene 60 times per second to prevent flickering of AR objects.
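Two of the rules above, the 2k texture cap and the 60-frames-per-second update target, can be expressed as simple checks. The numbers mirror the text; the helper names are our own and are for illustration only.

```python
MAX_TEXTURE_SIZE = 2048          # "2k resolution at most"
FRAME_BUDGET_MS = 1000.0 / 60.0  # ~16.7 ms per frame to avoid flicker

def clamp_texture(width, height):
    """Scale a texture down (preserving aspect ratio) to fit the 2K cap."""
    longest = max(width, height)
    if longest <= MAX_TEXTURE_SIZE:
        return width, height
    scale = MAX_TEXTURE_SIZE / longest
    return int(width * scale), int(height * scale)

def within_frame_budget(update_ms):
    """True if a scene update leaves the 60 fps target intact."""
    return update_ms <= FRAME_BUDGET_MS
```

For example, a 4096 × 2048 texture would be clamped to 2048 × 1024, and any per-frame work over roughly 16.7 ms risks dropping below 60 fps and producing visible flicker.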

Design For Safety And Comfort

AR is usually accompanied by the word ‘immersive’. Creating an immersive experience is a great goal, but AR immersion can be dangerous: people can be so immersed in their smartphones or glasses that they forget what is happening around them, and this can cause problems. Users might not notice hazards nearby and bump into objects. This phenomenon is known as cognitive tunneling, and it has caused a lot of physical injuries.

  • Prevent users from doing anything uncomfortable, for example physically demanding actions or rapid, expansive motion.
  • Keep the user safe. Avoid situations where users have to walk backward.
  • Avoid long AR sessions. Users can get fatigued when using AR for extended periods. Design stop points and in-app notifications telling them to take a break. For instance, if you design an AR game, let users pause or save their progress.
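The advice on session length can be sketched as a simple break reminder. The 20-minute threshold is an assumption for illustration, not a figure from the article; the right limit depends on your app and audience.

```python
BREAK_AFTER_MINUTES = 20  # hypothetical comfort threshold

def break_reminder(elapsed_minutes):
    """Return a reminder message once the AR session runs long,
    or None while the session is still within the comfort window."""
    if elapsed_minutes >= BREAK_AFTER_MINUTES:
        return "You've been in AR for a while - take a short break."
    return None
```

In a game, the same check would be the natural place to offer a pause or save prompt rather than just a message.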

Placement For Virtual Objects

There are two ways of placing virtual objects: on the screen or in the world. Depending on the needs of your project and the device’s capabilities, you can follow either approach. Generally, virtual elements should be placed in world space if they are supposed to act like real objects (e.g., a virtual statue in AR space), and as an on-screen overlay if they are intended to be UI controls or information messages (e.g., a notification).

Rokid Glasses.

‘Should every object in AR space be 3D?’ is a common question among designers who work on AR experiences. The answer is no. Not everything in the AR space should be 3D. In fact, in some cases like in-app notifications, it’s preferable to use flat 2D objects because they will be less visually distracting.
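The placement rule above can be sketched as a small decision helper: world space for objects meant to behave like real things, a screen-space overlay for UI controls and notifications. The role names and the enum are hypothetical, not from any AR SDK.

```python
from enum import Enum

class Placement(Enum):
    WORLD_SPACE = "world"    # anchored in the environment, e.g. a statue
    SCREEN_SPACE = "screen"  # overlaid on the display, e.g. a notification

def placement_for(role):
    """Pick a placement mode from an object's role in the experience."""
    ui_roles = {"notification", "ui_control", "info_message"}
    return Placement.SCREEN_SPACE if role in ui_roles else Placement.WORLD_SPACE
```

The same split answers the 2D-versus-3D question: screen-space elements such as notifications are usually flat 2D, while world-space objects are the ones worth modeling in 3D.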

Rokid Glasses motion design exploration by Gleb Kuznetsov.

Avoid Using Haptic Feedback

Phone vibrations are frequently used for feedback in mobile apps. But using the same approach in AR can cause problems: haptic feedback introduces extra noise and makes the experience less enjoyable (especially for AR glasses users). In most cases, it’s better to use a sound effect for feedback.

Make A Clear Transition Into AR

Both for MAR and AR glasses experiences, you should let users know they’re about to transition into AR. Design a transition state. For the iflyA380 app, we used an animated transition: a simple animated effect that the user sees when tapping on the AR mode icon.

Trim all the fat.

Devote as much of the screen as possible to viewing the physical world and your app’s virtual objects:

  • Reduce the total number of interactive elements available to the user on the screen at any one moment.
  • Avoid placing visible UI controls and text messages in the viewport unless they are necessary for the interaction. A visually clean UI lends itself to the immersive experience you’re building.
  • Prevent distractions. Limit the number of times objects appear on the user’s screen out of the blue. Anything that appears unexpectedly instantly kills realism and makes the user focus on that object.

AR Object Manipulation And Delineating Boundaries Between The ‘Augment’ And The ‘Reality’

When it comes to designing the mechanism for interacting with virtual objects, favor direct manipulation: the user should be able to touch an object on the screen and interact with it using standard, familiar gestures, rather than through separate visible UI controls.

Also, users should have a clear understanding of which elements they can interact with and which are static. Make it easy for users to spot interactive objects by providing visual signifiers, such as glowing outlines or other visual highlights.

Scan object effect for outdoor MAR by Gleb Kuznetsov.

When the user interacts with an object, you need to communicate visually that the object is selected. Design a selection state: highlight either the entire object or the space underneath it to give the user a clear indication that it’s selected.

Last but not least, follow the rules of physics for objects. Just like real objects, AR objects should react to the real-world environment.

Design For Freedom Of Camera

AR invites movement and motion from the user. One of the significant challenges when designing for AR is giving users the ability to control the camera. When users can control the view, they will swing the device around in an attempt to find points of interest. And not all apps are designed to help the user control the viewfinder.

Google identifies four different ways that a user can move in AR space:

  1. Seated, with hands fixed.
  2. Seated, with hands moving.
  3. Standing still, with hands fixed.
  4. Moving around in a real-world space.

The first three ways are common for mobile AR while the last one is common for AR glasses.

In some cases, MAR users want to rotate the device for ease of use. Don’t interrupt the camera feed with a rotation animation.

Consider Accessibility When Designing AR

As with any other product we design, our goal is to make augmented reality technology accessible for people. Here are a few general recommendations on how to address real-world accessibility issues:

  • Blind users. Visual information is not accessible to blind users. To make AR accessible to them, you might want to use audio or haptic feedback to deliver navigation instructions and other important information.
  • Deaf or hard-of-hearing users. For AR experiences that require voice interaction, you can use visual signals as an input method instead (also known as speechreading). The app can learn to analyze lip movement and translate this data into commands.

If you’re interested in learning more practical tips on how to create accessible AR apps, consider watching the video talk by Leah Findlater:

Encourage Users To Move

If your experience demands exploration, remind users they can move around. Many users have never experienced a 360-degree virtual environment before, and you need to motivate them to change the position of their device. You can use an interactive object to do that. For example, during I/O 2018, Google used an animated fox for Google Maps that guided users to their destination.

This AR experience uses an animated bird to guide users.

Remember That Animation Is A Designer’s Best Friend

Animation can serve multiple purposes. First, you can use a combination of visual cues and animation to teach users. For example, an animation of a phone moving around makes it clear what users have to do to initialize the app.

Second, you can use animation to create emotions.

One second of emotion can change the whole reality for people engaging with a product.

Well-designed animated effects help create a connection between the user and the product: they make the object feel tangible. Even a simple object such as a loading indicator can build a bridge of trust between the user and the device.

Rokid Alien motion design by Gleb Kuznetsov.

A critical point about animation: after discovering the elements of design and finding design solutions for the animation base, it’s essential to spend enough time creating a proper animated effect. It took lots of iterations to finish the loading animation you see above. You have to test every animation to be sure it works for your design, and be ready to adjust color, positioning, and so on to get the best effect.

Prototype On The Actual Device

In an interview with the Rokid team, Jeshua Nanthakumar mentioned that the most effective AR prototypes are always physical. That’s because when you prototype on the actual device from the beginning, you make the design work well on the hardware and software that people actually use. With unique displays like those on the Rokid Glasses, this methodology is especially important. By doing this you’ll ensure your design is implementable.

Motion design language exploration for AR Glasses Rokid by Gleb Kuznetsov.

My team was responsible for designing the AR motion design language and loading animation for the AR glasses. We decided to use a 3D sphere that would rotate during loading and have nice reflections on its edges. The design of the animated effect took two weeks of hard work from our motion designers, and it looked gorgeous on the high-res monitors of our design team, but the final result wasn’t good enough because the animation caused motion sickness.

Motion sickness is often caused by discrepancies between the motion perceived on the screen of AR glasses and the actual movement of the user’s head. In our case, though, the root cause was different: because we had put so much attention into polishing details like shapes and reflections, we unintentionally made users focus on those details while the sphere was moving.

As a result, the motion happened in the periphery, and since humans are more sensitive to moving objects in the periphery, this caused motion sickness. We solved the problem by simplifying the animation. But it’s critical to mention that we wouldn’t have been able to find this problem without testing on the actual device.

If we compare the procedure for testing AR apps with that for traditional GUI apps, it is evident that testing AR apps requires more manual interaction. A person conducting testing should determine whether the app provides the correct output based on the current context.

Here are a few tips that I have for conducting efficient usability testing sessions:

  • Prepare a physical environment to test in. Try to create real-world conditions for your app — test it with various physical objects, in different scenes with different lighting. But the environment might not be limited to scene and lighting.
  • Don’t try to test everything all at once. Use a technique of chunking. Breaking down a complex flow into smaller pieces and testing them separately is always beneficial.
  • Always record your testing sessions. Record everything that you see in the AR glasses. A session recording will be beneficial during discussions with your team.
  • Test for motion sickness.
  • Share your testing results with developers. Try to close the gap between design and development. Make sure your engineering team knows what problems you faced.


Similar to any other new technology, AR comes with many unknowns. Designers who work on AR projects have a role of explorers — they experiment and try various approaches in order to find the one that works best for their product and delivers the value for people who will use it.

Personally, I believe that it’s always great to explore new mediums and find new original ways of solving old problems. We are still at the beginning stages of the new technological revolution — the exciting time when technologies like AR will be an expected part of our daily routines — and it’s our opportunity to create a solid foundation for the future generation of designers.


Master audio production with this training bundle

If you want to make it as a creative in audio and visual mediums, you’re going to want to learn how to perfect a sound mix. Luckily, The Ultimate Logic Pro X Music Production Bundle contains everything you need to know about the basics of audio engineering for your podcasts, YouTube videos, and more.

In this course, you’ll learn to create sound mixes that work well for podcasts, audiobooks, tutorials, and plenty of other platforms. By using Logic Pro X, you’ll become skilled at the same tool used by music producers and audio engineers.

On top of that, you’ll find out how to create the highest quality mix for your voice, and build a voiceover track from scratch.

As if that wasn’t exciting enough, this bundle has dropped in price from $690 all the way down to $19!

Related articles:

Get industry-ready with Springboard’s UX bootcamp

Springboard’s UX Career Track is a web bootcamp with a difference. It has been designed with careers in mind, so you’ll come away armed with everything you need to snap up your first paid job in this in-demand sector. There’s an intensive online course for you to work through at your own pace, a dedicated mentor for guidance, and an industry design project to help you apply your skills in the real world. And if you don’t get a job within six months of completing the course, Springboard will give you your money back.

There are plenty of design courses out there, but many are long and very expensive, and few guarantee a job at the end. Springboard’s mission is to bridge the skills gap. Its UX Career Track gives you the skills, support and experience you need to snag a top job in UX at a reasonable price, in just six months. 

What can I expect from the course?

Springboard’s courses encourage you to learn the knowledge you need at your own pace by working through its online classes, which include articles, video tutorials, coursework and hands-on projects. The programme has been shaped with input from hiring managers, so it covers the skills companies are actually looking for right now, including user research, design thinking, prototyping and design sprints. 

The course takes around 15-20 hours a week, and because it’s self-paced you can fit your study around your commitments, without feeling rushed or having to worry about missing things. 

The course combines articles, video tutorials and coursework

You won’t be on your own, either. You’ll have a mentor assigned to you, who’ll support and guide you through weekly calls. They will help you set, track, and meet goals, and make sure your progress stays on track. There’s also a career coach to help you navigate the tricky world of recruitment and make sure you find the right job for you. 

It’s not all theoretical: you’ll spend 40 hours working with a real client on a design problem. The aim of this industry design project is to help put your new skills and knowledge into practice. You’ll use it to start building your portfolio, and come away with the real-world experience crucial for you to hit the ground running in your new career. 

Money-back guarantee

If you don’t get a job within six months, Springboard will actually refund your tuition fees. Past Springboard graduates have snagged jobs at companies including Facebook, Google and IBM, so we’re talking major roles here, too.

The course is ideally suited to those with a background in graphic design, social sciences, or web development. If that sounds like you, you could be turning your existing skills into a brand new career in this exciting, in-demand sector in no time. Visit Springboard to find out more about the UX Career Track, and sign up.

Bauhaus design: a guide to the design movement

The Bauhaus was a state-funded school set up by architect Walter Gropius in 1919. His mission, which became clearer as the school began writing manifestos about its purpose, was to use the visual arts to bring about a better society. He believed the way to do this was to break down the hierarchies of the creative world, which mirrored those of German society at the time. 

The Bauhaus would smash through the divisions between fine and applied arts, and develop a new aesthetic: made for the people. In real terms, this meant that crafts such as ceramics, print-making, textiles and metalworking would be afforded the same status as painting and sculpture. Later on, photography and graphics would be added to the mix, with a new focus placed on function and, ultimately, design. This has led to the design world we now know, and many of the best graphic design portfolios are influenced by the movement, whether their creators realise it or not.

Over the years, the Bauhaus existed in three different German cities: Weimar (1919-1925), Dessau (1925-1932) and Berlin (1932-1933). The school was unique at the time because it asked how the ‘modernisation process could be mastered by means of design’. 

This year marks 100 years since the school’s opening. Here, we take a quick look at the design movement, including some of the trends and philosophies connected to the school, as well as the Bauhaus logo, and ultimately the closure of the school. 

Bauhaus design: Mass production

Barcelona chair

Barcelona Chair designed by Ludwig Mies van der Rohe and Lilly Reich

Gropius realised machines offered a great opportunity to mass-produce appealing and practical products. The Bauhaus vision was to embrace the new technological developments unifying art, craft, and technology. It was primarily focused on clean geometric forms and balanced visual compositions.

The results were both beautiful and simple, from the modern Barcelona Chair designed by Ludwig Mies van der Rohe and Lilly Reich to abstracted line-form paintings by Wassily Kandinsky. Each practice was examined, explored and experimented with further by students and encouraging tutors alike.

For objects you can buy today in the style of the Bauhaus, see our post on objects to bring Bauhaus style to your studio.

Bauhaus design: Futuristic trends

typography treatment by Moholy-Nagy

Strong black and red typography treatment by Moholy-Nagy

Futuristic designs for the real world were being considered in various mediums, including wood, metal and glass. Graphic designers such as Moholy-Nagy, an avid user of red and experimental layouts, set strong design trends. He was not shy about augmenting the typography by standing it vertically or diagonally on the page – as designers, we know this is a difficult technique to pull off.

Moholy-Nagy’s work influenced, and was influenced by, Jan Tschichold, who championed a new movement in typography and wrote many of the rules of graphic design that are still there for us to break to this day. He looked at posters, pages and double-page spreads structurally, considered the benefits and disadvantages of symmetry and asymmetry, and introduced the concept of balancing headlines and body text, along with images, as forms on the layout.

Bauhaus design: Typography

Another key designer in the Bauhaus movement was Herbert Bayer, known for developing the typeface Universal. This ‘universal’ alphabet was commissioned by Walter Gropius in 1925 for exclusive Bauhaus use; unfortunately, it was never cut as a typeface. The characters are formed from perfect circles, with zero contrast and no embellishment whatsoever. It was meant to be clear, direct and efficient in its communication – an ideological statement of intent. Although its forms lacked balance and failed to achieve the legibility Bayer hoped for, elements of it were drawn into Joe Taylor’s typeface Bauhaus 93 in 1969.

Below is a re-issue of Bayer’s typeface, named Architype Bayer. It was drawn from Bauhaus Archiv sketches, based on his single-alphabet student thesis, and is now available from The Foundry.

Architype Bayer designed by Herbert Bayer

Architype Bayer designed by Herbert Bayer, re-issued for digital by The Foundry

Sometimes Bauhaus typefaces are described as Art Deco, but the word ‘mechanistic’ seems more accurate. Beautifully engineered lettering for an age of mass production was part of the aim. Even today when designers wish to express a sense of purity, often the answer is to reduce characters to pure, geometric forms.

Bauhaus design: the Bauhaus logo

the bauhaus school of design

The Bauhaus Dessau Foundation – its lettering is perhaps the closest thing the movement has to a logo

Although the Bauhaus aimed to develop a visual language for the future, the school never had a logo. Arguably, the work spoke for itself and the Bauhaus identity was there to be seen in the beautiful objects created.

Of course, there’s also the distinctive Dessau school building itself, designed by Walter Gropius. Perhaps the closest the school came to an emblem was the side-on face created by Oskar Schlemmer. It became the motif of the movement. 

The machine aesthetic is there to be seen in what is essentially a composition constructed using rectangles, constrained in a perfect circle. In it, there’s the sense that the Bauhaus would reshape the world, and perhaps there is even a hint of Mona Lisa-like contentment in the expression. 

Joost Schmidt's Bauhaus poster

Joost Schmidt’s Bauhaus poster

The form was incorporated into a poster promoting a 1923 Bauhaus exhibition while the school was still in Weimar. With its asymmetric tilted oval form, dashes of geometry and hairline serif type, it feels like a half-way house between the school’s Expressionist beginnings and new functionalist philosophy. Many a design student has mimicked this design in their coursework – and why not? The image was directly lifted and used as a logo by a 1980s British gothic rock band fronted by Peter Murphy, which called itself Bauhaus and was influenced by German Expressionism.

Bauhaus design: Closure of The Bauhaus

Political pressure and constant scrutiny by the Nazi movement (which strongly opposed modernism in favour of classicism) continued to cast a shadow over the school. In 1928 Gropius resigned and was then succeeded by Hannes Meyer. The school carried on with practice as usual.

In the 1930s the Bauhaus drew criticism from the Nazi figures Wilhelm Frick and Alfred Rosenberg, who labelled the Bauhaus ‘un-German’ and disagreed with the modernist styles on which the school was predominantly based. They characterised the Bauhaus as a front for Communists, Russians and social liberals. Further pressure from the Nazi regime forced the Bauhaus to close on April 11, 1933.

With many design movements, the outcomes come to look outdated over the years. In contrast, the Bauhaus philosophy has had a constant influence on all forms of design. Most major cities incorporate design elements from this theory of ‘form follows function’ – such as white walls, clean lines and glass – which is even more impressive when you consider that the school only existed for fourteen years.

Parts of this article were originally published in Computer Arts magazine; subscribe here.


The most powerful laptops in 2019

The most powerful laptops today are capable of handling incredible workloads. In the past, demanding tasks like video editing or graphic design required so much high-performance hardware that no laptop could come close to the capabilities of a desktop.

One way or another, you would have had to compromise: the most powerful laptops were often so big and heavy as to be barely portable, and many sacrificed battery life completely, lasting no longer than an hour away from a plug socket.

That’s no longer true. The latest generation of processors with six or eight CPU cores means the most powerful laptops can perform every bit as well as desktops. Graphics cards are better – you can now get full-fat desktop cards in some laptops, not the anaemic mobile versions that were once the only option. If you want to compare, take a look at our guide to the best computers for graphic design.

Screens are getting better too. Laptops now come with high-DPI colour accurate displays that look simply amazing, such as Apple’s DCI-P3 Retina displays and the 4K screens on some Windows laptops.

And what’s more, this beastly portable performance won’t break your back. The lower power requirements and more intelligent resource management of modern laptop hardware means manufacturers can opt for more compact cooling systems and physically smaller batteries, resulting in reduced laptop weight and thickness. It’s very impressive how so much computing performance can be squeezed out of such a small space.

We’ve listed some of the most powerful laptops on the market today, and found some great deals so high performance portable computing doesn’t have to cost an arm and a leg.

Dell describes its Precision 5530 as its thinnest, lightest and smallest 15-inch mobile workstation ever, and we’d agree. Available in ‘bright onyx’ or ‘platinum silver’, it’s a lovely machine with a brilliant specification.

The two-piece silver and black chassis looks great, and inside there’s a choice of Core i5, i7 or i9 Intel processors with four or six cores. Up to 2TB of fast NVMe storage is available and, unlike many other slim and light laptops, you can add a second hard disk, giving you a fast system drive plus additional internal storage for media.

Topping off the specification is an Nvidia Quadro 1000 or 2000 graphics card and an optional 4K touch-sensitive display to offer a premium computing experience.

Apple’s 15-inch MacBook Pro is the most powerful laptop the company has ever made, by a wide margin. Earlier this year, Apple began offering a new model with an eight-core 9th-generation Intel processor, which will fly through any computing task there is. Whether it’s video editing, 3D design or photo editing, the top-end MacBook Pro will chew through it effortlessly, whichever of the six- or eight-core models you choose.

Bump up the graphics card to an AMD Radeon Pro Vega 16, add more storage and boost the memory to 32GB and the MacBook Pro admittedly becomes a seriously pricey proposition. But alongside the fantastic display, excellent trackpad and fantastic battery life, the MacBook Pro is one of the best laptops on the market.

The futuristic looking angled edges on the lid and case of HP’s professional-grade ZBook Studio laptop befit an equally forward-thinking internal specification that features four or six-core Intel processors, up to 4TB of SSD storage (across dual SSDs) and Nvidia Quadro graphics.

While other high-end mobile workstations have fantastic screens, the 15.6-inch 4K HP DreamColor display on the ZBook Studio is possibly the best on any laptop. It offers 100% AdobeRGB coverage and 10-bit colour accuracy, which means visibly better colours, in addition to particularly high brightness of up to 600 nits.

Configurable with up to 64GB of memory, should you need it, this may not be the thinnest or lightest laptop on the market, but it’s certainly one of the most high-end.

The ThinkPad P1 is the real star of Lenovo’s ThinkPad laptop range. It has plenty of CPU power, with up to a 6-core Intel Xeon or Core i7 processor, and is considerably thinner, lighter (1.7kg) and more portable than any of the other ThinkPads.

Delve into the optional extras, and you’ll find the killer feature of the ThinkPad P1: a colour-accurate 4K display that looks particularly bright and vibrant, and doesn’t add too much to the price over the standard Full HD display, making it an upgrade we’d recommend.

Lenovo has more than one ultra-powerful laptop in its ThinkPad range. It’s also worth considering the P72, one of the most powerful laptops around: this 17-inch 4K workstation comes with up to 128GB of memory, 6TB of storage and a hugely powerful Nvidia Quadro P5200 graphics card.

Sporting a lovely royal blue chassis, the slim and portable 14-inch Zenbook Pro stands out as both attractive and highly capable, with processor and graphics performance in spades. And it has an interesting second display, a 5.5-inch screen built into the touchpad that can run special Asus-designed apps.

Weighing just 1.6kg, it comes with a quad-core processor and discrete Nvidia GeForce GTX 1050 graphics that give it a leg up in both creative software and gaming, plus up to 16GB of memory and a 1TB SSD. These are good specifications for the price, and will comfortably power any software you might run on it.

The 13.3-inch MSI Prestige P65 is a slightly different class of mobile workstation to the high-end laptops listed above. It’s definitely a powerhouse, with Nvidia GeForce graphics and an optional six-core processor. But rather than chasing ultimate colour accuracy, MSI has given the display a 144Hz refresh rate – a lovely feature, but one better suited to gaming than colour-critical work.

The design is great too: it’s one of the most lightweight laptops of its kind, and the chamfered edges complete an overall premium look and feel that does MSI proud.

Build a voice controlled UI

We’ve seen many new APIs added to the web over the last few years that have enabled web content to offer the same kind of functionality that many apps have had for some time. A relatively new one is the Speech Recognition API, which, as you can probably guess, lets you use your voice as text input on the page. It requires a click to start the service and another to stop it.

A great use case for this is accessibility, giving your users voice input as an alternative to clicking and typing. If your analytics show that you have a lot of mobile browsing, think how much easier it would be to speak into your phone than to use the keyboard. 

There have been predictions that screen-based interfaces might start to disappear within ten years. At first this might sound like science fiction, but as users get more and more comfortable with speech as input through the likes of Alexa and Siri, it stands to reason that this will become a pervasive input method. The tutorial here will get you up to speed on speech input and then use that to leave product reviews on an ecommerce site.

Download the files for this tutorial.

01. Start the project

Build a voice controlled UI: Start the project

Don’t worry about CSS as that’s already written [Image: Web Designer]

From the project files folder, open the ‘start’ folder in your code IDE and open the ‘speech.html’ file to edit. All the CSS for the project is already written, as styling isn’t the focus of this tutorial, so just add the link shown here to get the Noto Serif typeface and link up the CSS file.

02. Add the content

The first element is a wrapper to hold all of our on-screen content. Inside it, the first element will be a hidden message that tells the user the Speech API isn’t supported in their browser – it will only be seen if that’s the case. Then a heading tells the user that the form elements that follow will be used for their message.
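The tutorial’s actual listing isn’t reproduced in this excerpt, but the markup for this step might look something like the sketch below (the class and id names are assumptions, not the tutorial’s exact code):

```html
<!-- Wrapper holding all of the on-screen content -->
<div class="wrapper">
  <!-- Revealed by the JavaScript only if the Speech API is unsupported -->
  <p id="no-support" hidden>Sorry, your browser doesn't support the Speech Recognition API.</p>
  <h1>Speak your message</h1>
</div>
```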

03. Choose the results

When using the Speech API there are two ways to display the content. In one, text displays when the user has stopped speaking and the ‘listening’ button is clicked off. The other shows words on screen as they are spoken. This first radio button allows for the final speech result to be shown.

04. Radio two

The second radio button is added here and this one allows the user to select the text to be displayed as they speak. These radio buttons will be picked up by the JavaScript later and used to control the speech input, but for now this allows the user to have an interface to control that.
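As a rough sketch (ids assumed for illustration), the two radio buttons from steps 03 and 04 could be marked up like this:

```html
<!-- 'final' shows text once speaking stops; 'interim' shows it live -->
<label><input type="radio" name="result-mode" id="final" checked> Show text when I stop speaking</label>
<label><input type="radio" name="result-mode" id="interim"> Show text as I speak</label>
```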

05. Display the text

Build a voice controlled UI: Display the text

The user’s speech will end up in the ‘transcription’ text-area [Image: Web Designer]

The text that the user speaks into the page will need to be displayed on the screen. Here the text-area is added that has the id of ‘transcription’ — this will be targeted so that the user’s speech ends up here. There’s also a clear button to remove the text.

06. The last interface

Build a voice controlled UI: The last interface

Clicking the speech button starts and stops speech detection [Image: Web Designer]

The final interface elements are added to the screen now. The speech button enables and disables the speech, so it must be clicked before speaking. Clicking again stops it. As this is a relatively new interaction, the log underneath will tell the users what to do.
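Putting steps 05 and 06 together, a hedged sketch of the remaining interface markup (ids are assumptions) might be:

```html
<!-- The user's speech is written into this text area -->
<textarea id="transcription" rows="6"></textarea>
<button id="clear-all">Clear text</button>

<!-- Click to start listening; click again to stop -->
<button id="speech-btn">Speech</button>
<div id="log">Click the speech button, then start talking.</div>
```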

07. Add JavaScript

Now add the script tags before the closing body tag. This is where all of the JavaScript will go. The first two lines grab the page elements with the matching ID and store them in a variable. The transcription is the text result of the speech. The log will update the user with how to use it.

08. Variable results

Using the next few variables, more interface elements are cached. The speech button will become a toggle, letting users switch speech on and off, monitored by a Boolean true/false variable. The clear-all button will delete unsatisfactory speech results.
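A minimal sketch of steps 07 and 08, assuming the ids used earlier; the document object is injected so the helper can be exercised outside a browser:

```javascript
// Cache the interface elements once at startup.
// `doc` would be `document` in the browser.
function cacheElements(doc) {
  return {
    transcription: doc.getElementById('transcription'),
    log: doc.getElementById('log'),
    speechBtn: doc.getElementById('speech-btn'),
    clearBtn: doc.getElementById('clear-all'),
    interimRadio: doc.getElementById('interim'),
    speaking: false // Boolean toggle: are we currently listening?
  };
}
```

In the page itself you would simply call `cacheElements(document)` once the DOM has loaded.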

09. Is it supported?

The first thing our code will do is find out if this speech feature is supported by the user’s browser. If the result comes back as null, the if statement reveals the hidden message, while simultaneously removing the start button from the interface to prevent speech input.
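The feature check can be reduced to a small helper. This is a sketch, with `win` injected (it would be `window` in the browser) so it can be tested; note that Chrome exposes the constructor under a `webkit` prefix:

```javascript
// Step 09: return the Speech Recognition constructor,
// or null if the browser doesn't support the API.
function getSpeechRecognition(win) {
  return win.SpeechRecognition || win.webkitSpeechRecognition || null;
}
```

If this returns null, reveal the hidden message and remove the speech button from the interface.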

10. Start the recognition

The speech recognition is set up in the ‘else’ branch, which runs when the API is available. Continuous input is enabled to start with, as that is the default on the radio buttons. The ‘onresult’ function will handle the results of the speech input, which will be added into the transcription’s text field.

11. Final or interim?

The if statement now checks to see if the user wants to display the text as they are talking (interim) or only after they finish speaking (final). You will notice that if it’s interim, each word gets added to the text with the ‘+=’, while the final just dumps the whole text in there.
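The interim/final logic of steps 10 and 11 can be sketched as a pure function. In a real `onresult` event, `event.results` is a list of results, each holding alternatives with a `transcript` string plus an `isFinal` flag; the sketch below follows that structure, but it isn’t the tutorial’s exact listing:

```javascript
// Build the display text from a SpeechRecognition result event.
// With showInterim true, words appear as they are spoken;
// otherwise only finished phrases are returned.
function transcriptFromEvent(event, showInterim) {
  let text = '';
  for (let i = event.resultIndex; i < event.results.length; i++) {
    const result = event.results[i];
    if (result.isFinal || showInterim) {
      text += result[0].transcript; // take the top alternative
    }
  }
  return text;
}
```

In the browser you would wire it up with `recognition.onresult = (e) => { transcription.value += transcriptFromEvent(e, interimRadio.checked); };`.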

12. Handling errors

As with most JavaScript APIs there is an error handler that will allow you to decide what to do with any issues that might arise. These are thrown into the ‘log’ div to give feedback to the user, as it is essential that they are aware of what might be going on with the interface.

13. Start speaking!

The event listener here is started when the user clicks the button to start speaking. If the user is not speaking, then the button changes colour to show speaking has started, the variable for speaking is set to true and the ‘interim’ radio button is checked to see if this is the user’s choice for input.

14. Take the input

The ‘try and catch’ statement now starts the speech recognition and tells the user that they should start speaking and that when they are done, ‘click again to stop’. The catch will pick up the error and throw that into the ‘log’ div so that the user can understand what might be wrong.
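Steps 13 to 15 amount to a toggle. The sketch below injects the recognition instance and a small state object so the logic can be tested outside a browser; updating the button colour and the log div is left as comments, since those details aren’t reproduced here:

```javascript
// Toggle listening when the speech button is clicked.
function toggleSpeech(recognition, ui) {
  if (!ui.speaking) {
    try {
      recognition.start(); // throws if recognition is already running
      ui.speaking = true;
      ui.logMessage = 'Listening… click again to stop.';
      // Here you would also turn the button red.
    } catch (err) {
      ui.logMessage = 'Error: ' + err.message;
    }
  } else {
    recognition.stop();
    ui.speaking = false;
    ui.logMessage = 'Stopped. Click to speak again.';
    // And turn the button back to green.
  }
}
```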

15. Click to stop

Now when the user clicks to stop talking, the speech recognition is stopped. The button, red while talking, is changed back to green, and the interface is updated to inform the user that the service has stopped. The speaking variable is set to false, ready to let the user speak again.

16. Clear the text

Build a voice controlled UI: Clear the text

The clear button removes wrongly-interpreted speech [Image: Web Designer]

The final code for this section is just a clear button to remove the speech input text in case it is wrongly interpreted. Save the file and test this in your browser. You will be able to click the button to speak into the computer and see the results.

17. Add purpose

Now as you have a working example, there needs to be some purpose to the interface, so let’s make this so that users can input reviews. Save the page and then choose Save As, with the new name of ‘reviews.html’. Add the following HTML elements just after the
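The reviews container itself isn’t reproduced in this excerpt; a hypothetical sketch of what it might look like (the id is an assumption) is:

```html
<!-- Submitted reviews are appended inside this section -->
<section id="reviews">
  <h2>Reviews</h2>
</section>
```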


18. Total submission

Build a voice controlled UI: Total submission

The submit button submits inputted speech [Image: Web Designer]

The previous code will hold the reviews. The user will need to submit their speech input, so add the submit button right after the ‘clear text’ button, which will be around line 28 in your code. Then you can move down to the JavaScript for the next step.

19. New interface elements

At the top of your JavaScript, add the new variables to hold references to the new interface elements that have just been added. These will provide a way to submit the results and display them on the screen within the ‘reviews’ section of the page.

20. Submit the entry

The code here handles the user clicking the submit button; place it right before the ‘clear’ button code, which should be around line 88 in your code. First, a paragraph tag is created and the speech input is added into it. This is then appended into the ‘reviews’ section.

21. Final submission

Build a voice controlled UI: Final submission

If you want to store submitted speech you’ll have to use a database [Image: Web Designer]

The date is added so that the review is timestamped into the document. Finally a horizontal rule is added to show where each review ends, then the text is cleared ready for new input. Save the page and test this. You will see that you can now submit your speech into the page as reviews. For persistence you would need to use a database to store these results.
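The review-building logic of steps 20 and 21 can be sketched as a small helper that returns an HTML string; in the page you would append this to the reviews section and then clear the text area. The markup details are assumptions, not the tutorial’s exact listing:

```javascript
// Build the markup for one review: the spoken text, a date stamp,
// and a horizontal rule marking where the review ends.
function reviewHTML(text, dateString) {
  return '<p>' + text.trim() + '</p>' +
         '<p class="date">' + dateString + '</p><hr>';
}
```

Called as, say, `reviews.innerHTML += reviewHTML(transcription.value, new Date().toLocaleDateString())`, followed by clearing `transcription.value` ready for the next input.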

This article was originally published in issue 286 of creative web design magazine Web Designer. Buy issue 286 here or subscribe to Web Designer here.


Pizza Hut brings back iconic red roof logo

Have you ever ordered a pizza then realised that you can’t eat it in one sitting? Chances are you might have saved it overnight then reheated it in the morning for a breakfast treat. That’s sort of what Pizza Hut has done with its logo, by dusting off a design from the 60s and 70s.

The logo in question is the red roof design. This graphic has been the cornerstone of the Pizza Hut brand for decades, although successive iterations have tweaked it so much that the straightforward logo arguably lost its impact.

However, the original red roof design, which was used between 1967 and 1999, still looks timeless. Check out the revived version in action in the video for Pizza Hut’s relaunch of its Cheesy Bites pizza crust (above). It’s a good example of best practice logo design in action: it’s got simple shapes, crisp colours, and a clear message that all come together in one effective piece of branding.

Meanwhile the logo that rolled out in 2014 (below) inverted the colours to make the red roof white. The accompanying circular graphic brought to mind tomato sauce smeared on a doughy base, but even for the time it looked a little passé.

Old Pizza Hut logo

The previous Pizza Hut logo had been in use since 2014

The new logo isn’t identical to its predecessor (below) though. The colour of the roof has been bumped up from a dull carmine shade and now pops from the screen in a vibrant red.

Traditionalists will be happy to see that the flowing serifed lettering is still present and correct. It’s also now accompanied by the slogan ‘No one outpizzas the hut’, but the less said about that the better.

Pizza Hut logo

The 1967-1999 logo makes a welcome return

So why did Pizza Hut bring back the design? According to its chief brand officer Marianna Radley, it was because the chain wanted to reconnect with its roots and be “a little braver, a little bolder in our choices”.

Given that Pizza Hut was the first national US pizza chain, it has plenty of legacy to draw on. Lately it has been losing ground, though, with Domino’s overtaking it to become the largest pizza company by sales in 2017.

But with a new, old logo, NFL sponsorship and a revitalised menu, Pizza Hut hopes to claw back the public’s affection. “We need to have more guts in what we’re doing and be more confident,” Radley told The Drum. “I think we shied away from that over the years.”

Expect to see the new identity appear on Pizza Hut’s communications and promotions in the coming weeks.
