The Firebase Summit 2017 took place in Amsterdam
Francesco, Rui and Luis were there, and they've shared with us the talks, tips & tricks they found most interesting about that great event.

Extending Firebase to the web

James Daniels, Erik Haddad

James & Erik describe the scenario of two frontend developers who want to build an app - onSnapshot - to aggregate the most interesting news about Firebase. However, they have to overcome not having any backend developers on their team...

With a small set of requirements, that include the ability to store articles and comments, a live visitor count on each article and caching, we’re shown how some of what the Firebase ecosystem has to offer makes it possible to fulfill these requirements with great ease.

Code-wise, using AngularFire makes it easy to interact with all the Firebase services being used.

Storing articles and comments is achieved through Firebase's new database solution: Firestore, a JSON document store that lets us store collections of key-value pairs (JSON documents), which can in turn contain collections themselves - perfect for managing articles and comments. Just like Firebase's first database offering, it's real time, but it bundles new features like a more powerful query system and shallow query results.

onSnapshot's Firestore structure for storing articles and their respective comments.
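The nested layout can be pictured as plain objects: each document holds its own fields and may own a sub-collection of further documents (the field names below are illustrative, not taken from the talk):

```javascript
// Illustrative shape of the "articles" collection: each article document
// carries its own fields plus a "comments" sub-collection.
const articles = {
  "article-1": {
    fields: { title: "Introducing Cloud Firestore" },
    comments: { // sub-collection owned by this document
      "comment-1": { fields: { author: "alice", text: "Great read!" } },
    },
  },
};

// Reading a field from a nested comment document:
const text = articles["article-1"].comments["comment-1"].fields.text;
```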

The auth system is provided by Firebase as well, with the app wrapping what AngularFire offers by default, like anonymous login and Google login. Signing every user in automatically and anonymously makes it easy to apply server-side rules (for the database), and it allows for an easy transition from an anonymous session to the user signing in with their account.

Another key takeaway is how they leverage RTDB's connection handling to build an article view count. Whenever a user is viewing an article - signed in anonymously or through Google - a reference is stored in RTDB, mapping the article id to that user's id: /articleVisitors/articleId/userId. An onDisconnect() operation is then set on this reference so that when it's triggered (server-side) the reference auto-removes itself. This covers a few scenarios: the browser crashes, the user closes the browser, etc. However, if the user just navigates away, RTDB still maintains a connection, so it's necessary to remove the reference when this particular article view is destroyed. To finally get the view count, Cloud Functions are used to monitor changes on these /articleVisitors/articleId/userId references. When one occurs, the view count - stored in /articleViewCount/articleId - is updated.
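The counting step the Cloud Function performs can be sketched as a pure function over the /articleVisitors subtree (the names mirror the paths above; the Functions wiring itself is omitted):

```javascript
// Given the /articleVisitors subtree, derive /articleViewCount:
// each article's count is simply the number of connected visitor ids.
function computeViewCounts(articleVisitors) {
  const counts = {};
  for (const [articleId, visitors] of Object.entries(articleVisitors)) {
    counts[articleId] = Object.keys(visitors).length;
  }
  return counts;
}

// Example: two visitors on article "a1", one on "a2".
const counts = computeViewCounts({
  a1: { userA: true, userB: true },
  a2: { userC: true },
});
// counts => { a1: 2, a2: 1 }
```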

Finally, Firebase also offers hosting, making it easy to serve your web app through a powerful CDN.

Make sure you check the full talk here.

Actionable insights with Firebase Predictions

Jumana Al Hashal, Subir Jhanb

Jumana and Subir presented Firebase Predictions, a machine-learning-powered feature that analyses past user behavior to try to predict future behavior.
For example, it can predict which users will churn or not spend in your app, so you can take action and improve your retention or app revenue.

How does this work?

Training data is fed into a neural network built with TensorFlow. The model is then asked to take the entire historic period of events and generate predictions for the upcoming 7 days. These predictions are "labels" which get assigned to users: will-churn, will-spend, will-not-churn, will-share, etc.

In the meantime, you set up rules in your Remote Config (or push notifications) console using those labels.

Let's say you want to offer a discounted price to users that will possibly churn.
In that case you'll set up a Remote Config condition saying:
when <user has label will-churn> then <show discounted price> is true

This way, when the Firebase Predictions algorithm flags users as will-churn, they automatically fall into that bucket and the Remote Config value changes for them, revealing the feature you want to show.

To learn more about Firebase Predictions, check out this very detailed post by Joe Birch.

A/B Testing and More with Firebase

Laurence Moroney, Arda Atali

The new Firebase A/B Testing feature lets you run proper A/B test experiments, allowing you to define specific audiences for your experiments, multiple variants, targeting percentages and so on. The goal of your experiment must, of course, be selected; this is what you are aiming to improve with your changes. You can select one primary goal and multiple secondary ones, though you should keep your experiments as focused as possible in order to avoid bias in users' flow and in your decisions.

Results are very easy to analyse, in a simple table providing goal metrics for every variant, including the control group. The Firebase console will also highlight what the best variant for the primary goal is, so you don’t even have to necessarily look at the numbers and percentages: Firebase does everything for you.

You can also start a notification-specific experiment, without writing a single line of code: just access the Firebase Console and start an experiment from the Notifications panel. Your user groups will then receive different notifications according to the variant they fall in: goals and outcomes work just the same.

Introducing Cloud Firestore

Jonny Dimond, Sarah Allen, Alex Dufetel

Cloud Firestore is the latest database offering from Firebase. You can see it as a spiritual successor to the Realtime Database. Like its predecessor, it bundles together useful features like offline capability and real-time updates. However, unlike RTDB, it offers much more powerful querying and data structuring capabilities.

Example structure showcasing collections (“Articles”), documents (“Article”) and sub-collections (“comments”).

RTDB makes querying hard since it’s not possible to perform queries over more than one property, due to its underlying data structure. Unless you structure your data in advance, with all the querying you’ll be doing in mind, you’re forced to go through most of it in order to find what you’re looking for. At the same time, RTDB’s scaling capabilities will hit a roof at around 100k simultaneous connections, forcing you to partition your database through other (new) projects.

Both these issues are solved in Firestore: querying is much more powerful and scalability is, according to Google itself, already better than what RTDB offers and will eventually reach a point where it shouldn’t be a concern - i.e. you won’t have to worry about sharding at all as resources should scale appropriately when necessary.

Focusing on querying, RTDB would only allow you to sort and filter data based on one of the following: value of one of the child’s properties, the key of children or the value of children. You’d then have to work with filters to get the query results you were looking for.

Improving on this, Firestore offers compound querying, enabling you to chain multiple queries in one go:

    .where("planet", "==", "earth")
    .where("scouterLevel", ">", 9000)

Be aware that such querying requires you to define indexes through Firestore’s console. If you fail to do so, the official SDK will let you know! Another important detail to consider is that query results are shallow by default, which means that when you retrieve a document you won’t be getting all of the sub-collections that document might contain. This is in contrast to the way RTDB works.

In order to perform data validation, RTDB requires you to write validation rules. Let's say you want to validate a lat-long value that's part of your data structure: you'd need to write a special rule for this. With Firestore, since it supports a few rich data types (such as geo points), this is handled automatically. Regarding security rules, an important thing to keep in mind if you transition from RTDB to Firestore is that these rules do not cascade by default on Firestore, while they do on RTDB.
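Since rules don't cascade on Firestore, each sub-collection needs its own match block. A minimal sketch in Firestore's rules language, assuming the articles/comments structure from earlier (the conditions are illustrative):

```
service cloud.firestore {
  match /databases/{database}/documents {
    match /articles/{articleId} {
      allow read: if true;
      // Unlike on RTDB, the rule above does NOT apply to sub-collections;
      // comments must be matched explicitly.
      match /comments/{commentId} {
        allow read: if request.auth != null;
      }
    }
  }
}
```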

If you’re familiar with Google’s Cloud Datastore you’re probably spotting a few similarities. This happens because Firestore essentially relies on the same technology and infrastructure that’s behind Datastore.

If you’re looking for more info be sure to check the official announcement, over at The Firebase Blog.

Write production quality Cloud Functions code

Thomas Bouldin, Lauren Long

Cloud Functions, one of the most popular additions to Firebase, can be triggered by multiple types of events: users authenticating on your platform, events on the Realtime Database or on the newer Firestore, specific Analytics tracking events, storage uploads, even Crashlytics events. More commonly, though, you would trigger Functions through a simple HTTPS invocation.
By integrating Cloud Functions with the Hosting feature of Firebase, you can also render dynamic content, no longer only static web pages!

Using the Firebase SDK for NodeJS, you can easily write Cloud Functions leveraging NodeJS 8's native support for promises through the async/await keywords.
The Firebase SDK also exposes TypeScript interfaces that help you write code more efficiently and less error-prone, given its compile-time safety.
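The style async/await buys you can be seen in a plain Node sketch, where the two fetch steps stand in for typical reads a function might perform (the db object and its methods are illustrative, not Firebase APIs):

```javascript
// Two async steps chained without .then() nesting, plus try/catch for
// error handling - the style NodeJS 8's async/await enables.
async function loadArticleWithComments(db, articleId) {
  try {
    const article = await db.getArticle(articleId);
    const comments = await db.getComments(articleId);
    return { ...article, comments };
  } catch (err) {
    // A rejected promise surfaces here like a thrown exception.
    return { error: err.message };
  }
}

// A fake "db" is enough to exercise the flow:
const fakeDb = {
  getArticle: async (id) => ({ id, title: "Hello Firestore" }),
  getComments: async () => [{ text: "First!" }],
};

loadArticleWithComments(fakeDb, "a1").then((result) => {
  console.log(result.title); // prints "Hello Firestore"
});
```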

The biggest takeaway of this talk was Local Server, a tool for Firebase Cloud Functions that allows you to run those functions on your machine, with no configuration change at all! Local Server also lets you serve hosting pages, so that you can test your website entirely on your machine, allowing for easier integration and end-to-end tests.
And if you want to test a generic Cloud Function that, for example, listens to Firestore or Analytics events, you can start the Local Shell, a fully-fledged Node shell that loads your functions into the environment, so you can call them with any sample data you want.

BigQuery for Analytics

Todd Kerpelman

BigQuery is super powerful, but it's hard to use.
The main mistake people make is assuming the analytics data exported from Firebase into BigQuery is a flat row per event - and this is not the case.

Every row is a full JSON object that contains properties with simple values (strings, integers, etc.), but also others with more complex data, like arrays.

In this talk, Todd showed us a lot of useful tips and tricks for real-life use cases of BigQuery.

The main takeaway was the UNNEST function. With UNNEST you can unwrap an array, creating a new row for each element of the array. This is very useful for querying events and user properties.
For example, say you have an event with multiple parameters and want to filter by only one of them.

In this case, if we want to select only those analysis_completed events whose type parameter equals cache, our first attempt might be something like:

WHERE event_dim.name = "analysis_completed"

But this returns an error. The issue is that event_dim is not a single object, it's an array of objects. And while some of those objects have a property called name equal to analysis_completed, event_dim itself doesn't.

That’s why we first need to unwrap the array into individual rows (one per item of the array) before we can query it.

The full query we're after ends up looking something like this (with your exported events table in the FROM clause):

SELECT *
FROM `<your exported events table>`,
  UNNEST(event_dim) AS event,
  UNNEST(event.params) AS param
WHERE event.name = "analysis_completed"
  AND param.key = "type"
  AND param.value.string_value = "cache"
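What UNNEST does to each row can be mimicked with flatMap in plain JavaScript (the row shape below is a simplified stand-in for the real export schema):

```javascript
// One exported row: an array of events, each with an array of params.
const row = {
  event_dim: [
    {
      name: "analysis_completed",
      params: [{ key: "type", value: { string_value: "cache" } }],
    },
    { name: "session_start", params: [] },
  ],
};

// UNNEST(event_dim): one output row per event...
// ...then UNNEST(event.params): one output row per (event, param) pair.
// An event with no params contributes no pairs, just like the SQL join.
const pairs = row.event_dim.flatMap((event) =>
  event.params.map((param) => ({ event, param }))
);

// The WHERE clause is now an ordinary filter over flat rows.
const matches = pairs.filter(
  ({ event, param }) =>
    event.name === "analysis_completed" &&
    param.key === "type" &&
    param.value.string_value === "cache"
);
// matches.length => 1
```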

Read more about UNNEST in this great blog post:

Overall the Firebase Summit was a great event, and we cannot wait to put to use some of the things we have learnt. Bring on the next summit.