For the third and final part in this series, I’ll focus on the progress the LASS project made up until December 2013, which marked the 75% point of our project. In particular, some of the milestones we achieved were defining the use cases for our future software / prototype development, deciding which sensors we would eventually use, and identifying some of the potential risks we might face as we moved forward in the project.
Technical Deliverables and Progress Report
Unlike previous milestones throughout our project, the technical deliverables report provided more of a progress summary of work done up to the point it was submitted. The majority of the first four months of development was spent on planning and research into the methods required to make the LASS project come to fruition. In particular, by defining the use cases and potential risks, we set out the framework that helped direct how we intended to develop the rest of the project. The two months after focused largely on implementing technical details that are better described in other posts.
In engineering and software development, use cases are a way of defining the input and output relationships between actors and elements within a system. Typically, use cases are written in the form of tables, with one table per use case. However, as our project has more than just a few use cases, we felt it would be best to leave the full use-case descriptions in the original technical deliverables report rather than copying them here, mostly because the space required to present them would crowd out the rest of the discussion in this article. Alternatively, if you don’t want to read the full report but would still like to view the fleshed-out use cases, you can always check the original wiki page located here.
By this point in the project the group had split up in order to tackle the two major components of the project: developing the sensor prototype, and developing the corresponding website. Working separately, the two halves of our group came up with two sets of use cases defining how each component would interact with the overall model of the proposed system, shown below:
From this, the prototype side focused on the following use cases, which would ultimately upload data to the OGC SensorThings API (also labelled the OGC IoT database above).
To briefly explain, we needed to create a specific set of functions and classes that could take interactions from our actors (the Raspberry Pi and the sensors), format that data appropriately, and upload it using the SensorThings API. The specifics of how we implemented these use cases can be found in other sections, so I will spare you the details here.
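To give a flavour of what that formatting and upload step involves, here is a minimal sketch of posting a reading as a SensorThings Observation. The server URL and Datastream id are placeholders (our real deployment details aren’t in the original report), and this is an illustration of the API’s JSON shape rather than our actual implementation:

```python
import json
import urllib.request
from datetime import datetime, timezone

# Hypothetical endpoint -- the real server address came from our deployment.
STA_URL = "http://example.org/SensorThings/v1.0"

def make_observation(result, datastream_id, when=None):
    """Build a SensorThings Observation payload, linking the reading to an
    existing Datastream entity by its @iot.id."""
    when = when or datetime.now(timezone.utc)
    return {
        "phenomenonTime": when.isoformat(),
        "result": result,
        "Datastream": {"@iot.id": datastream_id},
    }

def post_observation(obs):
    """POST the Observation to the API's Observations collection."""
    req = urllib.request.Request(
        STA_URL + "/Observations",
        data=json.dumps(obs).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # SensorThings returns 201 Created on success
```

A sensor reading of `1` (motion detected) would then be sent with `post_observation(make_observation(1, datastream_id=42))`, where `42` stands in for whatever Datastream the sensor was registered under.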
The use cases for the website were similar in how we modeled them; however, the actors and their interactions were quite different and in some ways more extensive. Thus we came up with the following use-case diagram below:
The primary interactions in the diagram above are our actors (users of the service) interacting with elements on the website. In particular, we specified that our system should have some kind of authentication functionality, as seen inside the boxed area on the left of the diagram. This would allow users to authenticate themselves for their store, and view observations for their store in real time. If users were promoted to “system configurator” status, then they would be allowed to create their own map of their store, as well as assign specific actors from the prototype diagram to locations within their store.
It is important to note the separation of both sides of development, in that the only real dependency between the two sides was the OGC IoT database. This is what allowed our team to split our efforts and have more effective concurrent development. This flexibility helped greatly with developing our final application, as we could decouple development of each of our components from one another. In plain English, this effectively meant that we didn’t need either side to be finished or working in order to test or debug the other half of the project. Because the SensorThings API allowed us to easily read and update the data on the server, we would be able to push development forward even if we had to simulate data.
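Because the database was the only shared dependency, either side could stand in for the other with simulated data. A rough sketch of what that looks like for the website side, assuming PIR-style readings where `1` means motion and `0` means none (the trigger rate and interval are made-up parameters):

```python
import random
from datetime import datetime, timedelta, timezone

def simulate_pir_observations(n, start=None, interval_s=5, trigger_rate=0.3):
    """Generate n fake PIR readings spaced interval_s seconds apart, so the
    website can be developed and tested without a working prototype."""
    start = start or datetime.now(timezone.utc)
    obs = []
    for i in range(n):
        obs.append({
            "phenomenonTime": (start + timedelta(seconds=i * interval_s)).isoformat(),
            "result": 1 if random.random() < trigger_rate else 0,
        })
    return obs
```

Pushing a batch like this to the server gave the website realistic-looking data to render long before the physical sensors were wired up.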
While the technical deliverables report had initially stated that we had determined which sensors we would use and how we would use them, we later found that some of our original ideas would not work so well in practice, particularly due to interference and cost. So while the discussion below doesn’t fully match that of our original report, the sensors we actually used are described here.
Passive Infrared Sensor (PIR Sensor)
A passive infrared sensor, also referred to as a PIR sensor, is a small, round sensor that measures infrared light radiating from objects within its view. These types of sensors are often used in motion detectors; a common example can be seen in the automatic taps and soap dispensers in washrooms. Our aim was to integrate these sensors within a shelf in a store and track customers throughout the store by mapping the trail of motion activations. The specific PIR sensors we used can be found on sparkfun.com. See the image below:
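To illustrate the trail-mapping idea, here is a small sketch. The sensor ids and their (aisle, shelf) coordinates are hypothetical; the point is simply that a time-ordered list of PIR activations can be turned into the path a customer took:

```python
# Hypothetical layout: each PIR sensor id mapped to an (aisle, shelf)
# coordinate within the store.
SENSOR_LOCATIONS = {
    "pir-01": (0, 0),
    "pir-02": (0, 1),
    "pir-03": (1, 1),
}

def trail_from_events(events):
    """Turn a list of (timestamp, sensor_id) activation events into the
    ordered trail of store coordinates, collapsing consecutive repeats
    of the same location."""
    trail = []
    for _, sensor_id in sorted(events):
        loc = SENSOR_LOCATIONS.get(sensor_id)
        if loc is not None and (not trail or trail[-1] != loc):
            trail.append(loc)
    return trail
```

In practice the events would come from the SensorThings observations rather than an in-memory list, but the reconstruction step is the same.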
For more detail: Development Process: Technical Deliverables and Progress Report