Like every year, a selection of Novoda team members headed to Shoreline Amphitheatre last week for Google I/O 2018. They attended sessions, talked to Googlers about our clients' problems and needs, and lived and breathed all that's new in the many Google products we use daily. Here's what they found exciting.
Google has been very busy this year cooking up a great Google I/O. The massive refresh to Material Design, all the new AI-powered features in Assistant and Maps, and Android P with its many new smarts are just some of the headline announcements the various teams dished out during the keynotes and sessions. What are the Novodans who attended the conference particularly excited about?
There are so many things to get excited about that I really don't know where to start. Both designers and developers have something to rejoice about. I think Material Theming is a huge step forward in the maturity and versatility of Material Design; the preferred design system for Android and Google products is now a design system for design systems. I'm looking forward to seeing what people come up with given this new-found flexibility and freedom, and all the tools that were announced.
When it comes to Material Design, there's even more exciting news: Flutter, the extremely promising cross-platform app framework, is now an official Material Design implementation. Just check how many of the design talks at I/O prominently featured Flutter! And that's not all. The Flutter areas at Shoreline were constantly full of people, and the codelabs were consistently amongst the most used by the I/O crowd. There's a lot of buzz around this piece of technology, and I'm amazed at how fast things have moved since Eugenio Marletti and I gave the first public talk on the subject last year!
But I'm still an Android developer at heart, and I'm also very glad that Android P is not as minor a release as the first developer preview might have suggested. Besides all the new Material goodness, which makes P a Pleasure to use (pun intended!), there's a wealth of new APIs for developers to play with. Obviously, my favourite is the Slices API, together with App Actions.
Lastly, I really want to give a shout-out to the Digital Wellbeing features Google is introducing in Android P. I'm very happy to see them being attentive to the risks of phone addiction that every one of us faces daily.
I’m really pleased that accessibility hasn’t fallen by the wayside amid the recent drive towards AR and VR. Christopher Patnoe and Ran Tao opened on Day One with the aptly-named session, Accessibility for AR and VR, introducing both AR/VR and accessibility concepts to a packed tent of attendees. What I found interesting (and reassuring!) was the familiar advice given to designers to build inclusive experiences.
I had a quick peek around the Design & Accessibility sandbox too, where the Lookout team were showcasing a new app for vision-impaired users. The app combines computer vision with some custom gestures to facilitate hands-free usage, and it’ll be released pretty soon for you to try yourselves.
Not everything was super-flashy. As part of the Jetpack library of… libraries, the Android Support Library has been updated to AndroidX, basically a repackaging of the existing tools with some updates. The intention here is twofold: to give each artefact its own versioning, guaranteeing binary compatibility within a major version, and to rename the artefacts so it’s clear what’s inside each one. I think I’ll need a diagram of all the new libraries and plugins with “X” in their names before I feel comfortable, though.
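As a concrete flavour of the renaming, here's how a couple of familiar Support Library dependencies map onto their AndroidX counterparts. The 1.0.0 versions shown are the announced starting points for the independently-versioned artefacts, so treat this as a sketch rather than gospel:

```groovy
// build.gradle (app module), before: Support Library artefacts,
// all version-locked to each other and to the compile SDK.
implementation 'com.android.support:appcompat-v7:28.0.0'
implementation 'com.android.support:design:28.0.0'

// After the AndroidX repackaging: independently versioned artefacts
// with names that say what's inside each one.
implementation 'androidx.appcompat:appcompat:1.0.0'
implementation 'com.google.android.material:material:1.0.0'
```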
Since its launch, Google Photos has allowed users to easily store, search, and share an unlimited number of pictures, which, however, couldn't be accessed by third-party apps.
As a passionate photographer, I’m really excited about the newly announced Photos API. While still in developer preview, this API allows developers to show users' pictures from Google Photos directly in an app. Even more interesting is that it will be possible to programmatically upload new pictures and create albums with rich information, including labels, maps and location.
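The exact surface of the API may still change while it's in developer preview, but the announced upload flow has two steps: push the raw image bytes to obtain an upload token, then attach that token to the user's library with a `mediaItems:batchCreate` call. Here's a minimal sketch of building that request body; the endpoint URLs reflect the preview announcement, and the OAuth token acquisition is only hinted at:

```python
# Sketch of the Google Photos Library API upload flow (developer preview).
# Endpoint URLs are from the preview announcement; acquiring an OAuth 2.0
# access token with the photoslibrary scope is out of scope here.

UPLOADS_URL = "https://photoslibrary.googleapis.com/v1/uploads"
BATCH_CREATE_URL = "https://photoslibrary.googleapis.com/v1/mediaItems:batchCreate"


def batch_create_body(upload_token: str, description: str) -> dict:
    """Build the JSON body that attaches uploaded bytes to the user's library."""
    return {
        "newMediaItems": [
            {
                "description": description,
                "simpleMediaItem": {"uploadToken": upload_token},
            }
        ]
    }

# Usage (with a valid access token):
#   1. POST the raw image bytes to UPLOADS_URL -> receive an upload token.
#   2. POST batch_create_body(token, "Sunset at Shoreline") to BATCH_CREATE_URL.
```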
At I/O this year, there were over 15 hours of sessions relating to Firebase. Since Google’s acquisition of Fabric last year, I’ve been particularly interested in the progression of Firebase as a platform as it brings on more of Fabric’s suite.
A particularly exciting announcement as an iOS developer was the availability of ‘Test Lab’ on iOS devices. Test Lab allows you to remotely test your application on a range of devices, which is really important to reduce the chances of device-specific bugs or UI issues. With this now being available on iOS, projects spanning both platforms can reliably know the state of their apps, and closer cross-platform team collaboration can take place.
Additionally, on the Analytics side, the Firebase console has been adjusted so you can now see combined statistics across both mobile platforms if required. This mitigates the usual segregation between the platform implementations and gives a clearer picture of the overall state of your app.
Even more excitingly, ML Kit is now available as a Firebase integration, giving access to a set of APIs that let your app recognise text, detect faces, scan barcodes, label images, and recognise landmarks. You can run these either on-device or in the cloud, depending on your requirements.
The expansion of features available on Firebase is really exciting as an app developer, especially when working across both platforms. Having such a plethora of tools available so easily is a fantastic resource, and I’m really excited to see the developments over the next 12 months.
This year’s I/O had something for everyone to enjoy. The conference featured lots of technical talks, as well as product, design, and even inspirational ones.
The one thing I am most excited about has to be ConstraintLayout, Android’s new way of building layouts. Not only does it allow developers to create those nearly-impossible-to-implement designs, it also gives them great flexibility in how to achieve them.
ConstraintLayout now brings a new API called Helpers, which allows you to create custom behaviour for a specific view or a group of views that can be reused across different designs (composition over inheritance). In addition to all these goodies, the team is also working on MotionLayout, a subclass of ConstraintLayout. As the name suggests, it provides a flexible way of creating animations across your screens by specifying keyframes, similar to modern animation software.
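To give a flavour of helpers, here's a hypothetical layout using the Group helper from ConstraintLayout 1.1: it references other views by id and lets you toggle their visibility together, without wrapping them in a nested ViewGroup. The ids and attributes below are illustrative:

```xml
<android.support.constraint.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <TextView
        android:id="@+id/title"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        app:layout_constraintTop_toTopOf="parent"
        app:layout_constraintStart_toStartOf="parent" />

    <TextView
        android:id="@+id/subtitle"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        app:layout_constraintTop_toBottomOf="@id/title"
        app:layout_constraintStart_toStartOf="parent" />

    <!-- Helper: setting headerGroup.visibility = View.GONE in code
         hides title and subtitle together, with no extra nesting. -->
    <android.support.constraint.Group
        android:id="@+id/headerGroup"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        app:constraint_referenced_ids="title,subtitle" />

</android.support.constraint.ConstraintLayout>
```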
The one talk that left me inspired has to be Designing for Inclusion, where John Maeda went through some bits of the Design in Tech Report 2018, pointing out why inclusion is such an important topic not just for people with special needs, but for you as a designer, a professional, a human being, and an ageing person. The same talk featured Hannah Beachler, Black Panther's production designer, going through the process behind designing the futuristic world of Wakanda.
Last but not least, Google I/O featured research talks, and I cannot express my appreciation enough for that! It was great to get insights from research done within Google about Material Design and how each component came to be. If you want to learn more, I encourage you to watch the Material Metrics talk.
For me, what’s really interesting to see at this year’s I/O is the amount of time and energy Google are investing in Augmented Reality. Google announced two big new features that are going to shape AR applications: Cloud Anchors, and Augmented Images.
Cloud Anchors is a cross-platform feature that will help synchronize group AR activity. Put simply, cloud anchors allow multiple people to be involved and interact with the same Augmented Reality experience simultaneously.
Why is this good? Previously, in my opinion, AR has been a fairly lacklustre, solitary experience: one person looking at one small phone display. If you compare this solo experience to that of a VR headset, AR is left wanting. VR can create a whole world for an individual, and therefore a multitude of exciting experiences. This new feature, however, opens up opportunities for people to experience AR with their friends across multiple devices. It adds a much-needed social element and opens up a world of opportunity for AR.
Augmented Images is a new feature that lets an application turn 2D images into 3D experiences very simply. It works by detecting known images in real time and then rendering 3D assets on top of them. Use cases include education and media, but the main and most interesting one is advertising: this feature means advertisers will be able to turn stationary adverts around the world into far more engaging experiences.
In conclusion, these new features show that ARCore is being taken very seriously by Google, and that AR is becoming more and more likely to see widespread adoption in the near future. You can get a more detailed overview of these new features, and others, in the "What’s new in AR" talk.