
Google I/O 2019 is over, here's what we are excited about

Novoda has a reputation for building the most desirable apps for Android and iOS. We believe living and sharing a hack-and-tell culture is one way to maintain top-shelf quality.

Google I/O 2019 was a blast! You can now watch all 177 sessions on YouTube. We've been discussing all the shiny new stuff since last week, and here's a selection of the things we're most excited about.

[Image: Google I/O 2019 animated logo]


Paul Blundell
Head of Engineering

➤ Live Coding A Machine Learning Model from Scratch

This was an excellent session by Sara Robinson using a tool called Colab to create a TensorFlow ML model from scratch. Having a background in Android development, I came to this with little experience but a lot of interest. From my understanding, Colab is an online IDE for writing Python; it's backed by Google Cloud and runs the code somewhere (you don't need to know where). Sara went through the steps of defining the problem, codifying that problem into Python objects, then creating a model using Keras to train on that data and give a response. The actual task was to classify Stack Overflow questions and automatically label them with the programming language they represented.


I learnt a great deal about machine learning: not so much the theory as the practical steps of model creation. Colab looks like a really easy-to-use tool, and this session is a great reference that I know I'll keep coming back to.

This model creation example can be used in Android development to let us define and create our own models. We can then use other Google tools like ML Kit and Firebase to upload the model to the cloud and have it synced to our devices, allowing for on-device classification. I am currently building my own model for classifying whether a game being played is going to get good feedback, and this talk has given me the building blocks to make that happen.
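A minimal sketch of that last step, assuming the ML Kit custom-model API of the time (firebase-ml-model-interpreter); the model name is a hypothetical stand-in for whatever was registered in the Firebase console:

```kotlin
import com.google.firebase.ml.common.modeldownload.FirebaseModelDownloadConditions
import com.google.firebase.ml.common.modeldownload.FirebaseModelManager
import com.google.firebase.ml.custom.FirebaseCustomRemoteModel
import com.google.firebase.ml.custom.FirebaseModelInterpreter
import com.google.firebase.ml.custom.FirebaseModelInterpreterOptions

// "game_feedback_classifier" is a hypothetical name: use whatever name the
// model was given when it was uploaded to the Firebase console.
val remoteModel = FirebaseCustomRemoteModel.Builder("game_feedback_classifier").build()

// Only fetch the model over Wi-Fi; ML Kit keeps it in sync afterwards.
val conditions = FirebaseModelDownloadConditions.Builder()
    .requireWifi()
    .build()

FirebaseModelManager.getInstance()
    .download(remoteModel, conditions)
    .addOnSuccessListener {
        // The model is now available locally, so classification runs on-device.
        val options = FirebaseModelInterpreterOptions.Builder(remoteModel).build()
        val interpreter = FirebaseModelInterpreter.getInstance(options)
        // interpreter?.run(...) with inputs shaped to match the trained model.
    }
```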


Tobias Heine
Android Developer

➤ Build a Modular Android App Architecture

This talk by Florina Muntenescu and Yigit Boyar is about choosing the right modularization strategy using Gradle library modules and dynamic feature modules, including dynamic code loading.

Modularization helps to scale a codebase by decoupling certain aspects of an application. Furthermore, dynamic feature modules help the business A/B test different implementations of a feature and can reduce the user drop-off rate between installing and opening an application by shrinking the APK size. The talk compares the two strategies of splitting an application horizontally into modules by layer (e.g. IO, domain and UI) or vertically by feature (e.g. search and about), including migration strategies from one approach to the other and best practices around testing and navigation.
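As a rough illustration (module names are hypothetical), the split shows up directly in how the project's modules are declared:

```kotlin
// settings.gradle.kts

// Horizontal: one Gradle module per layer.
include(":app", ":ui", ":domain", ":io")

// Vertical: one Gradle module per feature.
// include(":app", ":search", ":about")
```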

[Image: dynamic feature modules]

The part I found most exciting was dynamic feature modules and dynamic code loading:

Using Gradle library modules, we can describe dependencies between modules: a feature module exposes an API and a consuming module resolves it at compile time.

Then, using dynamic feature modules, this dependency can instead be resolved at runtime, meaning the binary shipped to the user does not necessarily include all modules: dynamic feature modules can be downloaded on demand.
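In build-file terms, that looks roughly like this; a sketch only, with hypothetical module names, using the Android Gradle plugin's dynamic-feature support:

```kotlin
// app/build.gradle.kts — the base module declares which dynamic features exist.
android {
    dynamicFeatures = mutableSetOf(":search")
}

// search/build.gradle.kts — the on-demand feature module.
plugins {
    id("com.android.dynamic-feature")
}

dependencies {
    // Note the direction: the dynamic feature depends on the base module,
    // not the other way around.
    implementation(project(":app"))
}
```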

Furthermore, the dependency is inverted: instead of the app depending on a feature module, the dynamic feature module depends on the consuming module. This requires the consumer to use reflection in order to resolve the dependency on the provided API. The talk also mentions a GitHub sample repository demonstrating how to load and use classes from dynamic feature modules using either reflection, the ServiceLoader, or Dagger 2.
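A minimal sketch of the reflection approach, with hypothetical package and class names:

```kotlin
import android.content.Context
import android.content.Intent

// In the base module: the API the dynamic feature will implement.
interface SearchFeature {
    fun searchIntent(context: Context): Intent
}

// In the :search dynamic feature module: the implementation. The base module
// cannot reference this class at compile time, only at runtime.
class ReflectiveSearchFeature : SearchFeature {
    override fun searchIntent(context: Context): Intent =
        Intent(Intent.ACTION_SEARCH) // placeholder implementation
}

// Back in the base module: resolve the implementation by name once the
// feature module has been installed.
fun loadSearchFeature(): SearchFeature =
    Class.forName("com.example.search.ReflectiveSearchFeature")
        .getDeclaredConstructor()
        .newInstance() as SearchFeature
```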


Daniele Bonaldo
Android, Wearables and IoT GDE

➤ Android Jetpack: Understand the CameraX Camera-Support Library

This talk introduced the new CameraX support library in Jetpack.
If you've ever tried to create an Android camera app, or just to interact with the current Camera API, you know how hard it can be to configure it and to follow the preview and capture flow. I did that for my Do-it-yourselfie photo booth and the final code was far from clean. That's one of the reasons I'm excited about the new CameraX API.

[Image: CameraX]

Based on the Camera2 API, CameraX is backwards compatible down to Android L and allows you to instantiate, configure and manage the camera with a more fluent, easy-to-follow API.
The whole library is built on top of three main use cases:

  • Preview
  • Image analysis
  • Capture

These use cases are bound to the activity lifecycle, so CameraX is lifecycle-aware and there's no need to worry about starting and stopping the camera anymore.
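For example, binding a preview use case with the alpha-era API shown at I/O looked roughly like this (a sketch only: CameraX was in alpha at the time, so the exact API surface may change; it also assumes the CAMERA permission has already been granted):

```kotlin
import android.os.Bundle
import android.view.TextureView
import androidx.appcompat.app.AppCompatActivity
import androidx.camera.core.CameraX
import androidx.camera.core.Preview
import androidx.camera.core.PreviewConfig

class CameraActivity : AppCompatActivity() {

    private lateinit var textureView: TextureView

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        textureView = TextureView(this)
        setContentView(textureView)

        // Configure and create the preview use case.
        val previewConfig = PreviewConfig.Builder()
            .setLensFacing(CameraX.LensFacing.BACK)
            .build()
        val preview = Preview(previewConfig).apply {
            setOnPreviewOutputUpdateListener { output ->
                textureView.surfaceTexture = output.surfaceTexture
            }
        }

        // Lifecycle-aware binding: CameraX starts and stops the camera
        // with this activity, with no manual open/close calls.
        CameraX.bindToLifecycle(this, preview)
    }
}
```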

CameraX behaves consistently across different devices and supports what are called Extensions.
These are device-specific capabilities (like portrait, night mode, HDR, etc.), and with the new library it's incredibly easy to check whether an extension is available and let the user enable it.


Luis G. Valle
Mobile Principal Engineer

➤ What's new in Architecture Components?

Architecture Components have grown up a lot since they were first presented at Google I/O 2017.
According to Google's internal surveys, they are now used by more than 70% of professional Android developers.
This year, Google announced that Architecture Components are going to be Kotlin-first from now on. This means they will be written in Kotlin and their APIs designed for Kotlin. Java will still be supported.

Here are two topics that caught my attention in this session:

DataBinding will get better tooling support. The binding class will now be generated live, without waiting for compilation: when you change an ID in your layout, it changes immediately in your Kotlin code.
There's also going to be support for refactoring, so when you rename a function in your ViewModel, it changes automatically in your layout.
And finally, error messages are going to be improved as well, so no more mysterious “binding fail” errors.

Talking about ViewModels, there's going to be a way to access the activity's saved state from the ViewModel. We'll be able to get and put values in the saved state, so if the app process is completely killed, we'll be able to restore everything to the way it was before.
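A minimal sketch of what that looks like with the new SavedStateHandle (the module was in alpha at the time; the ViewModel and key names here are hypothetical):

```kotlin
import androidx.lifecycle.SavedStateHandle
import androidx.lifecycle.ViewModel

class SearchViewModel(private val state: SavedStateHandle) : ViewModel() {

    // Backed by the saved state, so the value survives process death.
    val query = state.getLiveData<String>("query")

    fun onQueryChanged(newQuery: String) {
        // Writing through the handle persists the value into the
        // activity's saved instance state.
        state.set("query", newQuery)
    }
}
```

The ViewModel is then created with the saved-state-aware factory from the lifecycle-viewmodel-savedstate artifact, which wires the handle to the activity's saved instance state.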


What are the things that you liked the most? Let us know on Twitter; we're excited to chat about it!
