Growing Herbs @ Home

with Raspberry Pi 4 and balenaCloud

Jeff Gensler
15 min read · Jun 21, 2020

For the last few years, I have been particularly interested in automation as it relates to producing food. Part of this idea was explored in my previous post about growing mushrooms at home (link to previous post). The goal of this project is to track the growth of a plant over time. With this data, I should be able to share some of the growing patterns of various types of plants. I decided to grow herbs because I've grown them before and the growing process shouldn't be too difficult.

In this guide, I will cover the beginning of the project (mostly setup and design). Given that I will learn things along the way, I'll try to write most things down here, as the details won't be as clear a few weeks from now.

Working with Raspberry Pi 2

At the start of this project, I decided to buy two Raspberry Pi 4s with Sense HATs for the various sensors and the LED lights. These took a few days to arrive, so I figured I could get familiar with the tooling and try to set up a test project with my Raspberry Pi 2. The short-term goal was to get a Python app deployed to the Raspberry Pi and interacting with the pins in some way.

I have already done some investigation into deployment patterns and technical choices. I enjoyed trying resin.io a while back (since renamed to balena, with balenaCloud as the hosted service) because it used Docker containers to build and deploy applications. This deployment model makes code dependencies explicit and provides a convenient, well-known packaging format.

I did some searching and found this guide; most of this first part follows it, highlighting the pieces I found interesting.

Creating the Application

If you have a GitHub account, you'll be able to log in to balenaCloud. From there, you can create a new application and select the default device type. I did a bit of searching but couldn't find a way to change the default device type later. Probably not a big deal, but something to note to save a few clicks later.

Installing the OS

I quickly realized that I didn't have an SD card reader on either my work or personal laptop, nor on my desktop. I did happen to have an Acer C720 lying around, but it was running ChromeOS and I was a bit worried I wouldn't be able to reformat the SD card. The first thing I tried was reformatting the drive:

mkfs.vfat -F 32 /dev/sdb

The only problem I faced here was ChromeOS not recognizing the SD card at all (no /dev/sdb). To be honest, I just plugged it back in a few times and eventually it showed up.

After the card was reformatted, I downloaded the OS from balenaCloud (with development mode checked) and wrote it to the SD card. I am a bit hazy on the process here, but I think the OS repartitions the drive once it is installed, so to install the OS again I had to look for another solution. Typically you would use Etcher, but it is not available on Chromebooks. However, Chromebooks have a Recovery Utility for creating ChromeOS backups and installation media, and it can actually be used to write balenaOS to the SD card.

Verify Device is Working

After installing the OS and connecting your Raspberry Pi to the network using a wired connection, you'll see it show up in the UI.

In this case, I was using a Raspberry Pi 2, which only supports wired connections (I'm not sure about using a USB WiFi adapter). There isn't much that can break in a wired setup, so startup and connection to balenaCloud should take under two minutes. For a Raspberry Pi 4 using WiFi, you can check your router's web interface for connected devices. If you have many devices on your WiFi, it may be difficult to differentiate them by hostname (some may not have one!). MAC addresses can typically be mapped to organizations using the first three octets. Here is the range for Raspberry Pis.
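As a sketch, the prefix check is simple enough to script. The prefixes below (b8:27:eb for older Pis, dc:a6:32 for the Pi 4) are the commonly listed Raspberry Pi OUIs, but the authoritative list lives in the IEEE OUI registry:

# Hypothetical helper: flag MAC addresses whose first three octets match
# known Raspberry Pi OUIs. The prefix set here is partial; check the IEEE
# OUI registry for the full, current list.
RASPBERRY_PI_OUIS = {"b8:27:eb", "dc:a6:32"}

def looks_like_raspberry_pi(mac: str) -> bool:
    prefix = mac.lower().replace("-", ":")[:8]  # first three octets
    return prefix in RASPBERRY_PI_OUIS

print(looks_like_raspberry_pi("DC:A6:32:12:34:56"))  # True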

Writing Some Code (Hello World)

After deploying the OS and connecting your device, it is time to deploy some code. The first step in this process is choosing a base image. Looking through examples, there are many styles and types of images you can use: some are bare operating systems and some are language-specific. You can check out most of them in this repository:

I ended up choosing the following image:

FROM balenalib/%%BALENA_MACHINE_NAME%%-python:3-stretch-build

Make sure to name the file Dockerfile.template so that the template variables (e.g. rpi2 for the machine name) get substituted correctly. If you name it Dockerfile, you'll end up with syntax errors that don't tell you that you are using template variables but have misnamed the file. I believe this template mechanism is specific to balenaCloud, and it was new to me when figuring out how to get this working.

Also, make sure your SSH keys are up to date (run ssh-keygen if needed and upload the public key to GitHub). I haven't been uploading code to GitHub at all, so I didn't have a public/private keypair for my desktop machine. balenaCloud can import SSH keys from GitHub, so you can add your key there and then import it in the balenaCloud UI later.

Copy the rest of the files (requirements.txt, main.py) into your repo and push to the balenaCloud git remote.

# the exact remote URL is shown in your balenaCloud application dashboard
git remote add balena <username>@git.balena-cloud.com:<username>/<appname>.git
git add --all
git commit -m "init"
git push balena master

Builds on balenaCloud use cached layers, so the design of your Dockerfile matters. The build infrastructure is responsive and your build should complete in a minute or two.

After the container is running on your Raspberry Pi, you can access the HTTP server. If you are on the same network as the Pi, you can reach it by its private/network IP address (shown below). There is also a feature to generate a public URL (basically a VPN, similar to ngrok).

Hello World from Flask Application

Aside: Installing the balenaCloud CLI

I think the balena CLI is required for "local" development mode (the "development" checkbox when downloading and installing the OS). I wasn't able to get this working, but here are the steps I followed. I use Chocolatey to install packages like this:

choco install balena-cli

After installing, you'll need to log in. This is interesting because its auth flow redirects to localhost, where the CLI can use the token supplied in the query string to identify you. Usually, redirects to localhost are for development, but this is one case where using it in production makes sense!

balena.exe login
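This isn't balena's actual implementation, but the general localhost-callback pattern looks roughly like this minimal sketch (the port and the token parameter name are made up):

# Sketch of the localhost-callback pattern a CLI can use for auth:
# listen for exactly one request, read ?token=... from the redirect, then exit.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class CallbackHandler(BaseHTTPRequestHandler):
    token = None
    def do_GET(self):
        qs = parse_qs(urlparse(self.path).query)
        CallbackHandler.token = (qs.get("token") or [None])[0]
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"You can close this tab now.")

server = HTTPServer(("127.0.0.1", 8080), CallbackHandler)
server.handle_request()  # serve exactly one request
print("received token:", CallbackHandler.token)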

After logging in, you can run various CRUD-style commands to view applications, builds, and devices.

Interacting with the Pins

Given that the Hello World for Arduino is a blinking light, I figured I should do something similar for my Raspberry Pi. I found several Python libraries along the way that can interact with the pins. I ended up using the following:

After that, we can update our Hello World function to blink the light:

from time import sleep
from flask import Flask
from gpiozero import LED  # assumed library; the snippet only relies on on()/off()

app = Flask(__name__)
led = LED(17)  # example GPIO pin
@app.route('/')
def hello_world():
    led.on()
    sleep(1)
    led.off()
    return 'Hello World!'

After committing and pushing, you should be able to see the LED blink!

LED off / LED on!

With this example working, we can use balenaCloud as a deployment tool without having to configure much else to get access to the pins.

Working with Raspberry Pi 4

At this point, my Raspberry Pi 4s came in and I started to write code that is more specific to my application. I am interested in using LEDs to try to grow plants. Specifically, I bought the Sense HAT because I figured it would be the quickest way to get started and figure out whether or not it would work for my use case.

I pretty much followed this whole guide again for my Raspberry Pi 4s (create a new application, upload a new hello world, etc.). I did have some issues when trying to install the libraries required for the Sense HAT (they weren't found in the apt repos for either Debian or Ubuntu). I also tried to get the Python package for the Sense HAT working, but that failed too: I kept installing dependencies and ended up with both a gigantic container and code that still didn't work.

After a bit of thought, I realized that there is another distribution with apt repos: Raspbian. After switching my base image to that, I was able to find and install the sense-hat package.

After this, I started writing my Python code. I opted for learning more about the asyncio package.

import asyncio
import datetime
import logging
from typing import Callable

class Callbacker(object):
    def __init__(self, loop: asyncio.AbstractEventLoop,
                 f: Callable[[datetime.datetime], datetime.datetime]):
        self.logger = logging.getLogger('callbacker')
        self.loop = loop
        self.f = f

    def call_backer(self):
        now = datetime.datetime.now()
        self.logger.debug("cb: before {}".format(now.timestamp()))
        callback_time = self.f(now=now)
        self.logger.debug("cb: scheduling {}".format(
            callback_time.timestamp()))
        # loop.time() is monotonic, so schedule relative to it
        loop_callback_time = self.loop.time() + \
            (callback_time.timestamp() - now.timestamp())
        self.loop.call_at(loop_callback_time, self.call_backer)

    def start(self):
        self.loop.call_soon(self.call_backer)

I wrote a helper class to call a function (f). This function should return the time at which it should be called next. call_backer will then schedule the next invocation on the event loop. The interesting thing here is that the event loop's time() function returns a monotonically increasing number, so we need to schedule tasks relative to that number.
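To make the usage concrete, here is a minimal sketch of driving the class above with a hypothetical tick function that simply asks to be called again five seconds later:

import asyncio
import datetime
import logging

logging.basicConfig(level=logging.DEBUG)

def tick(now: datetime.datetime) -> datetime.datetime:
    print("tick at", now.isoformat())
    return now + datetime.timedelta(seconds=5)  # ask to be called again in 5s

loop = asyncio.get_event_loop()
Callbacker(loop=loop, f=tick).start()
loop.run_forever()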

I generally wanted to avoid code that calls sleep, so that task/time management is located entirely in the event loop. While just a guess, I figured this would be more efficient in terms of CPU and scheduling, as the event loop is implemented in Python itself (which likely has access to OS primitives and optimizations (epoll?)).

The above code worked, and I was able to write a helper class and function for deciding whether the lights should be on or off (I am guessing that my plants should get ~12 hours of light).

def manage_lights(self, now: datetime.datetime) -> datetime.datetime:
    # WHITE_PIXEL is an (R, G, B) tuple, e.g. (255, 255, 255)
    if now.hour > self.start_hour or now.hour < self.end_hour:
        self.hat.show_message("YYY")
        self.hat.set_pixels([WHITE_PIXEL for i in range(64)])
    else:
        self.hat.show_message("NNN")
        self.hat.clear()
    return now + datetime.timedelta(minutes=1)

In the above code, the function is called every minute. When the time is right, I can be more aggressive about how long to wait until the next call (hours instead of minutes), or do some more math and make sure the function is called at exactly 9:01 PM in the event it was restarted at 8:50 PM. This is why I chose to return a datetime.datetime instead of a datetime.timedelta: it should help the developer realize that the value they return will (hopefully) be the next argument to the function call, and that the only datetimes they care about are the decision points in their own code.
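As a sketch of that "more math" (assuming a simple same-day window defined by start_hour and end_hour, which differs from the wrap-around check above), the function could return the exact next transition instead of polling every minute:

import datetime

def next_transition(now: datetime.datetime,
                    start_hour: int, end_hour: int) -> datetime.datetime:
    """Return the next datetime at which the lights should toggle."""
    today_start = now.replace(hour=start_hour, minute=0, second=0, microsecond=0)
    today_end = now.replace(hour=end_hour, minute=0, second=0, microsecond=0)
    if now < today_start:
        return today_start                                # lights turn on later today
    if now < today_end:
        return today_end                                  # lights turn off later today
    return today_start + datetime.timedelta(days=1)       # next on-time is tomorrow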

Aside: Dealing with power

When trying to test the HATs, I was initially confused about why one Pi would start and one would not. It turns out that this is related to power consumption and having the right type of charger. The "5 volt 1 amp" Chromecast charger was not enough to power both the board and the HAT. The second charger (5.2 volt 2 amp) did end up working. After finding that out, I decided to buy another charger with an even higher amperage: 5 volt 3 amp.

5 volt 1 amp, 5.2 volt 2 amp, 5 volt 3 amp

Aside: Sending Device Data to Google Cloud

At this point, I have a decent framework for scheduling tasks with varying frequencies (the decision to turn lights on/off is different from the decision to upload humidity data). I can just create a new callback for every feature I want to add and schedule them all on the same event loop.

I did some investigation into using Google Cloud to store metric data from the various sensors included with the Sense HAT. I took a look at the architecture that Google Cloud IoT would give me, and I think it is a bit complex given that I am using balenaCloud for most of the device management and don't need state in two places. To get something small working, I'll probably just set up credentials and upload directly to Stackdriver using the Python APIs (guide). Putting this data in Stackdriver will let me alert when conditions in my growing environment are unfavorable and require attention (e.g. p95 humidity out of range, possibly indicating a hardware failure).
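As a rough sketch of that direct-upload approach (assuming a recent google-cloud-monitoring client with application-default credentials configured, and a hypothetical custom metric name), a single humidity reading could be written like this:

import time
from google.cloud import monitoring_v3

def write_humidity(project_id: str, humidity: float) -> None:
    client = monitoring_v3.MetricServiceClient()
    series = monitoring_v3.TimeSeries()
    series.metric.type = "custom.googleapis.com/grow_tent/humidity"  # hypothetical metric
    series.resource.type = "global"
    series.resource.labels["project_id"] = project_id
    now = time.time()
    seconds = int(now)
    nanos = int((now - seconds) * 10**9)
    interval = monitoring_v3.TimeInterval({"end_time": {"seconds": seconds, "nanos": nanos}})
    point = monitoring_v3.Point({"interval": interval, "value": {"double_value": humidity}})
    series.points = [point]
    client.create_time_series(name=f"projects/{project_id}", time_series=[series])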

I would imagine that I also want this data for historical purposes and will want it stored in a database for growth-prediction models. I am not sure of the best design for uploading these data points to two places. This might be where the IoT topic-queue pattern comes in, as I likely don't want to bloat my devices with too much code (especially transaction logic).

Sense HAT: will it work?

I have code that will turn the lights on and off, but will this be enough light to grow plants? I didn't do much planning around the light source; I figured I would buy the Sense HAT and run some tests to gain more intuition about what I need to grow plants.

To test the amount of light produced by the Sense HAT, I created a measuring device consisting of a NodeMCU and an LM393 photoresistor expansion board. You can use this guide to get your Arduino environment set up. After getting a simple server working, you can have the page return an analog read from the light sensor.
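On the logging side, a small poller can record those readings over the day. This is a sketch assuming the NodeMCU page returns the raw analog value as plain text at its root URL (the address and interval are examples):

import csv
import time
import datetime
import urllib.request

SENSOR_URL = "http://192.168.1.50/"  # example NodeMCU address
INTERVAL_SECONDS = 300               # one reading every five minutes

with open("light_readings.csv", "a", newline="") as f:
    writer = csv.writer(f)
    while True:
        reading = urllib.request.urlopen(SENSOR_URL, timeout=10).read().decode().strip()
        writer.writerow([datetime.datetime.now().isoformat(), reading])
        f.flush()
        time.sleep(INTERVAL_SECONDS)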

The first step was to measure the light value at various points in the day. I am not concerned with the absolute value itself; I am interested in how it compares to the value generated by the Sense HAT.

The only thing to note on this graph is that 4:50 PM (16:50) had direct sunlight. This is likely the target value that I would like to reach inside my tent, and I would want to sustain it for close to 12 hours to provide plenty of light to these plants.

Next, I needed to measure light inside my growing area. I am trying to use a grow tent so that I can (hopefully) create a controllable environment and also supply more light than my apartment does (currently less than 2 hours of direct sunlight per day).

Extremely Professional Tent Setup

The "2ft" column is from when I placed the light sensor setup on a cardboard box that is 18 inches tall. It took up around half the grow tent, so I just called it "2ft" for simplicity. A few things to note:

Basically, this data is showing me that the Sense HAT will not provide any useful light at all (even though its light is bright to the human eye). Maybe there is some brightness setting that I have missed, but I would imagine this HAT will never outperform the LED flashlight.

I have purchased two more fixtures that I will try later: a 256-LED layout and a 300-LED waterproof light strip. I am guessing that these will perform similarly to the flashlight, so I am mainly interested in whether I need to build a larger array of lights separate from the Raspberry Pis.

Measuring the Plants

While more light experiments are underway, I figured I would take a detour and focus on how to measure plant growth. I believe this part of the project will provide the most value and will be critical for evaluating the growing conditions.

There are a few options I have when thinking about measuring plant growth:

  • Lidar: expensive, and only provides depth data (no color)
  • Sonic: very cheap sensors, but probably lots of manual data assembly
  • Pictures (photogrammetry): data valuable in multiple ways, large storage cost, approximation based on math

After weighing the programming time against the cost to get started, I decided to do some investigation using cameras. I knew that building stereo vision using two cameras is possible, as I had done some research a while back on ROS as a framework for building these sorts of applications (see depth_image_proc for an example). After sensing the environment, you also need to apply locomotion to build a more comprehensive "map" of the environment (see SLAM).

Given the limited space in the grow area, I wondered if using one camera would be possible. I ended up finding the following guide which shows how to generate a 3D model using only pictures from one camera (and no positional data)!

To download COLMAP, take a peek at the GitHub releases. After downloading, you'll need to take a bunch of pictures. I initially tried the guide using only 10 pictures; this was not enough to create a model, so I went back and took ~60 pictures, as I saw their example datasets had closer to 100. I am using my Google Pixel (first version), and the images are around 4 MB each.

Using COLMAP starts with a three-step process that generates a "sparse" model, which can be saved and loaded later. The process consists of three actions taken in the UI.

  1. Feature Extraction
  2. Feature Matching
  3. Build model (sparse model)
original pictures and sparse model

After you generate and save the sparse model, you can create the dense model by following these steps:

  1. Select a sparse model in the dropdown (I chose “Model 2”) and click “Dense reconstruction”
  2. Select an output directory for the dense model (this UX is a bit clunky and it ends up copying all of your pictures referenced in the sparse model)
  3. Click Undistortion (pretty fast)
  4. Click Stereo (took ~2 hours, though it references each picture twice so I think I misconfigured something. I think it would take closer to 1 hour for the 60 photos I took)
  5. Click Fusion (shown below, took ~10 minutes)
dense model (~2 hours because of misconfiguration that duplicated files)

After the Fusion step, you’ll be left with two .ply models in your project directory.

The guide above mentioned using MeshLab, so I installed it to see what I could do. One thing I noticed is that MeshLab doesn't know how big or small things are. This makes sense, because the pictures themselves are 2D and don't carry scale. To determine scale, we need a reference object in our images. Then we can apply a scaling factor to the mesh, and the other objects in the mesh should come out the "right" size. See the following video for the idea behind scaling.

Let's say I want to use the box as a reference object. In our mesh, one side of the box measures 6.1 units. In real life, we know the box is 12 inches, so we can calculate the scale factor: 12 / 6.1 ≈ 1.967.
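Applying that factor programmatically is straightforward. Here is a small sketch assuming the trimesh library, with the COLMAP output file name used as an example:

import trimesh
from trimesh import transformations

SCALE = 12.0 / 6.1                       # known real-world inches / measured mesh units
geom = trimesh.load("fused.ply")         # example name for the COLMAP dense output
geom.apply_transform(transformations.scale_matrix(SCALE))
geom.export("fused_scaled_inches.ply")   # everything else in the model is now in inches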

before scaling: side of box is 6.1; after scaling, USB charger is 1.75 which is pretty close to real life!

This comes remarkably close to the real-life object. That seems like a decent margin of error, and suitable for our project. The main problem with the above guide will be automation (especially since this runs on Windows). Fortunately, COLMAP does have a command-line interface.
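For reference, here is a rough sketch of driving that CLI from Python. The command names follow COLMAP's documented reconstruction pipeline, but the paths are examples and the flags should be double-checked against the version you downloaded:

import os
import subprocess

IMAGES, WORKSPACE = "images", "workspace"
DB = f"{WORKSPACE}/db.db"
SPARSE, DENSE = f"{WORKSPACE}/sparse", f"{WORKSPACE}/dense"
os.makedirs(SPARSE, exist_ok=True)
os.makedirs(DENSE, exist_ok=True)

steps = [
    ["colmap", "feature_extractor", "--database_path", DB, "--image_path", IMAGES],
    ["colmap", "exhaustive_matcher", "--database_path", DB],
    ["colmap", "mapper", "--database_path", DB, "--image_path", IMAGES,
     "--output_path", SPARSE],
    ["colmap", "image_undistorter", "--image_path", IMAGES,
     "--input_path", f"{SPARSE}/0", "--output_path", DENSE],
    ["colmap", "patch_match_stereo", "--workspace_path", DENSE],
    ["colmap", "stereo_fusion", "--workspace_path", DENSE,
     "--output_path", f"{DENSE}/fused.ply"],
]
for cmd in steps:
    subprocess.run(cmd, check=True)  # stop the pipeline if any step fails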

All in all, this strategy could be used to replace the stereo vision + ROS software. All we need is to take 100+ pictures of our plants from various angles and include a reference object.

What about color?

We've investigated the size of an object (and the model happens to include color, too). Is there a similar way to "scale" the colors? From my investigation, there are things called "color targets" that can be used for calibration. If the amount of light is constant, I wonder if we can include one of these color targets as a similar reference point so that we can scale the colors between photo attempts. I don't imagine we absolutely need this data at the beginning, but it is nice to realize that we might want it after several grow cycles.

Finding the Plant

In the above mesh-construction guide, we didn't cover how to "split" the mesh into a box and a USB charger. I will do some research here, but my understanding is that this is a much more complex topic than construction alone. If we were only using 2D images, we might try image segmentation. Is it possible that a segmentation algorithm exists for 3D models? What would it take to build our own? Just like I had to manually label data in my MushroomBot, I wonder if I can train a classifier to identify plant regions in a mesh.

Next Steps

Overall, the above experiments show that I have plenty of work left in both light selection and plant measurement. I'll post again once I make some more progress on the project.
