Android Things works by creating an Android app and tying it to peripherals using User Drivers. These peripherals come in a variety of forms; this post will explore their differences and commonalities to help you understand where User Driver code fits in your architecture and codebase.

When using peripherals, you will need to create or reuse a Peripheral I/O driver. I highly recommend you take a look at this blog post on creating your own drivers with Android Things and check out this list of ready-to-use drivers on GitHub.

Android Things peripherals

Each peripheral you use will have a different Peripheral I/O protocol (or multiple protocols in some cases). These differences mean your peripheral communication can be one-way (sending actions or receiving data) or two-way, with actions and data feedback. Your peripheral might only need single-pin monitoring, or it might monitor multiple pins, in which case you can use serial protocols to send and receive larger payloads of data. Once your driver is working, you can use this knowledge to start refactoring and ensure your code is clean and correctly composed.


Peripherals come in different forms: active peripherals do their own thing once they have power, while passive peripherals wait for something to activate them. There are many hardware examples: buttons, switches, motion, light or sound detectors, metal detectors, temperature gauges, solar panels, GPS chips, accelerometers, LEDs, screens, speakers, vibrators, lasers, segment displays, Bluetooth, WiFi or RFID transmitters, electric motors and many more. These all fall into two categories:

  • Input peripherals that consume information from the outside world
  • Output peripherals that produce information for the outside world

Android Things code structure for peripherals

In your codebase, a simple but naive way to structure your architecture is to group your peripherals by protocol. This would mean having all the GPIO peripherals in one place and all the I2C peripherals in another, allowing you to quickly find the code for each peripheral once you know its driver protocol. This is handy for fixing all GPIO peripherals at once, but do you ever need to do that? It’s pretty rare. This is reminiscent of architecting by class type, where you have all your Activities in one place, all your Services in another, Views somewhere else and so on.

This type of grouping is “tidy” but it comes with other disadvantages. For example, if you added a speaker peripheral and then later wanted to update it, you may not recall that it uses I2C, and this protocol-driven structure would make that hard to look up. However, you will know the speaker makes beeping noises and is turned on in reaction to some user input. Let’s look at another way to group peripherals.

Structuring your architecture to group domain peripherals by input and output is an efficient way to understand when and where they are used.[1] The speaker we described above or an LED would both be output devices, meaning they feed information back to the user. A button would be an input device, as the user presses it to give information to the system. This means you only have to search for input or output when looking for the relevant block of code for a peripheral. This is a useful step towards a cleaner architecture.

Naming Android Things peripherals

An input peripheral can also be called a sensor. A sensor detects or measures a physical property and responds to it. This means a sensor measures a particular element in the outside world and feeds its knowledge back into the system. This sounds quite like a description of a temperature gauge, a motion detector or a button.

A device which detects or measures a physical property and records, indicates, or otherwise responds to it.

sensor [2]

Another name for an output peripheral is an actuator. An actuator is responsible for moving or controlling a mechanism or system, or presenting data to the outside world. This means an actuator moves or acts to change the outside world with knowledge we’ve shared from the system. This sounds quite like a description of a speaker, LED screen or electric motor.[3]
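To make the split concrete, the two roles can be captured as small interfaces. This is a minimal sketch with in-memory fakes standing in for real hardware; the `Sensor`, `Actuator`, `FakeTemperatureSensor` and `FakeLed` names are assumptions for this example, not Android Things APIs.

```java
// Illustrative sketch: the two peripheral roles as minimal interfaces.
interface Sensor<T> {
    T read(); // measure something in the outside world
}

interface Actuator<T> {
    void write(T value); // act on the outside world
}

// Fakes showing the shape of each role.
class FakeTemperatureSensor implements Sensor<Double> {
    public Double read() {
        return 21.5; // a canned reading instead of real hardware
    }
}

class FakeLed implements Actuator<Boolean> {
    boolean lit = false;

    public void write(Boolean on) {
        lit = on;
    }
}
```

An app then composes the two: read from the sensor, apply a rule, write to the actuator.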

A component of a machine that is responsible for moving or controlling a mechanism or system.

actuator [4]

Model View Presenter with Android Things peripherals

What’s the best way to use this clean code and architecture theory in practice? Let’s take our Android Things peripherals with sensor and actuator drivers and attempt to use them in an MVP UI architecture. Remember, this is my MVP; there are many ways to implement MVP, but this one is mine.[5]

Let’s start with an example. You have an application using a PIR motion detector and an LED. When the motion detector senses enough movement, the LED will turn on. The LED would be an actuator and the motion detector a sensor. From an MVP point of view, your LED is part of the View as it is the visible output of the app, and the motion detector is part of your Model as it is involved in the logic for movement, especially the rule “enough movement”. The presenter mediates between the two and isn’t important in this context.
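As a sketch of that mediation, the presenter might subscribe to the sensor and drive the actuator when the movement rule fires. All class, method and threshold names here are hypothetical, not taken from a real Android Things codebase:

```java
// Hypothetical presenter mediating between a movement sensor (Model)
// and an LED actuator (View).
public class MotionPresenter {

    public interface MovementSensor {
        void startListening(MovementListener listener);
    }

    public interface MovementListener {
        void onMovement(int level);
    }

    public interface LedActuator {
        void turnOn();
        void turnOff();
    }

    private static final int ENOUGH_MOVEMENT = 50; // illustrative threshold

    private final MovementSensor sensor;
    private final LedActuator led;

    public MotionPresenter(MovementSensor sensor, LedActuator led) {
        this.sensor = sensor;
        this.led = led;
    }

    public void startPresenting() {
        sensor.startListening(level -> {
            // The "enough movement" rule lives in the Model/Presenter,
            // never inside the actuator itself.
            if (level >= ENOUGH_MOVEMENT) {
                led.turnOn();
            } else {
                led.turnOff();
            }
        });
    }
}
```

Because the presenter only sees interfaces, neither the real PIR hardware nor the real LED needs to exist for this class to be exercised.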

Following this train of thought, the naming of your peripherals can follow the same pattern: you would have PirMovementSensor and MotionLedActuator. This improves the consistency of your code and makes future peripheral lookups easier[6].

[Image: package structure example]

The domain is clearly expressed in the above application. You know where to look for each class and the pattern to use when adding new features. The peripherals are clearly identifiable and separated by behaviour, so you can easily find them.
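In case the structure is hard to picture, a hypothetical package layout following this pattern might look like the tree below; all package and class names are illustrative:

```
com.example.motionlight
├── MotionActivity.java          // wires everything together
├── MotionPresenter.java         // mediates between Model and View
├── model
│   └── PirMovementSensor.java   // input peripheral (sensor)
└── view
    └── MotionLedActuator.java   // output peripheral (actuator)
```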

As an example, the MotionLedActuator would sit in the View and be controlled by the Presenter. The Presenter knows when the app is starting (onCreate) or when the LED needs to be turned on (from the Model). The actuator itself is very dumb; its interface exposes a set of methods for the Presenter to control it:

import android.util.Log;

import;

import java.io.IOException;

public class MotionLedActuator implements LedActuator {

    private final Gpio bus;

    public MotionLedActuator(Gpio bus) {
        this.bus = bus;
    }

    public void startup() {
        try {
            // Configure the pin as an output, starting low (LED off).
            bus.setDirection(Gpio.DIRECTION_OUT_INITIALLY_LOW);
        } catch (IOException e) {
            Log.e("blog", "Failed to startup.", e);
        }
    }

    public void turnOn() {
        try {
            bus.setValue(true);
        } catch (IOException e) {
            Log.e("blog", "Failed to turn on.", e);
        }
    }

    public void turnOff() {
        try {
            bus.setValue(false);
        } catch (IOException e) {
            Log.e("blog", "Failed to turn off.", e);
        }
    }

    public void shutdown() {
        try {
            bus.close();
        } catch (IOException e) {
            Log.e("blog", "Failed to shut down.", e);
        }
    }
}

The sensor and actuator interfaces allow for testability and replacement of peripherals with either alternative hardware or mock implementations. Read more in this blog post on testing Android Things.
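For instance, a fake implementation of the actuator interface lets a unit test assert on behaviour without real hardware. This is a self-contained sketch: the `LedActuator` interface is written out as implied by the class above, and the `FakeLedActuator` name is mine:

```java
// The LedActuator interface as implied by the post.
interface LedActuator {
    void startup();
    void turnOn();
    void turnOff();
    void shutdown();
}

// A hypothetical in-memory fake, usable in unit tests in place of the
// GPIO-backed MotionLedActuator. It just records the state it was told.
class FakeLedActuator implements LedActuator {

    boolean started = false;
    boolean on = false;

    public void startup() {
        started = true;
    }

    public void turnOn() {
        on = true;
    }

    public void turnOff() {
        on = false;
    }

    public void shutdown() {
        started = false;
        on = false;
    }
}
```

A test can then drive the Presenter with this fake and assert on `started` and `on` instead of probing a physical pin.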


Understanding the problem space, including Peripheral I/O and protocols, is the first step to a more in-depth insight into your architecture and to creating cleaner code. It can help you identify which hardware peripherals are available and how to split them into two groups: sensors for input, actuators for output.

Structured naming conventions increase clarity when you’re reading code and searching for classes. They also help communication within your team: because everyone understands what an actuator or a sensor is, everyone is on the same page during discussions.

Having an architectural separation between components allows you to fit Android Things peripherals into the MVP UI design pattern, leading to a cleaner codebase.

The key naming takeaway:

Name your input peripherals *Sensor

Name your output peripherals *Actuator

  1. This assumes that you already structure your codebase by domain and describes the substructure that comes below that in your architecture. DDD ↩︎

  2. ↩︎

  3. If you’re wondering about the Raspberry Pi Rainbow Hat, this is a combination of multiple sensors and actuators. In your architecture your individual objects should not know of the hat itself, just of the bits it is made up of - the sensors and actuators. ↩︎

  4. ↩︎

  5.'s_Creed ↩︎

  6. For example, in Android Studio on Mac, press CMD + O, then type *Sensor to see all sensor peripherals. ↩︎