
Outcome


Project Development

Robotic dogs already exist and make popular presents. Our team imagined what would happen if these dogs were integrated into the Internet of Things and became not only companions but also part of security systems. This is a logical extension of their current role, given that many people get dogs to help keep them safe. However, a robotic dog faces the same struggle a real one does: telling friend from foe. It might also be much harder to subdue or deactivate if the owner is unable to do so for any reason. With this in mind, we developed a scenario in which a good technology makes the wrong decision:


As can be seen in the real world, technology is trusted, and part of why it is trusted is that it behaves in a repeatable, dependable way. Even if the technology had a way of recognising who counts as a "good" person, that list would either have to be populated by the owner or determined in some other way. The core question is: does technology, as it stands, have the capacity to judge correctly every time? Designers have to take this into account when developing a system, because either the system must be certain or its response must be moderated. If not, a single misjudgement can become societal distrust of a whole class of technologies, just as a whole breed comes under suspicion when one dog attacks a child.

Biscuit

Recommendations for IoT designers:

1. Take into account abnormal situations and test whether the object is used in unexpected ways (e.g. an owner activating security mode during the day, when friends might visit). 

2. Moderate responses, and limit what can be done automatically when an action directly impacts people other than the owner. 

3. Avoid violent responses in objects. 

4. Keep security decisions under human judgement until AI develops further. 
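Recommendations 2–4 can be sketched as a simple moderation gate. This is a minimal illustration under our own assumptions, not a real device API: the `Action` class and `moderate` function are hypothetical names, and the idea is just that violent actions are refused outright, while actions affecting anyone other than the owner wait for a human decision.

```python
from dataclasses import dataclass

# Actions an IoT security device might refuse to automate outright
# (recommendation 3: avoid violent responses in objects).
VIOLENT_ACTIONS = {"restrain_intruder"}

@dataclass
class Action:
    name: str
    affects_third_party: bool  # impacts someone other than the owner?

def moderate(action: Action, owner_confirmed: bool = False) -> str:
    """Decide whether the device may carry out an action automatically."""
    if action.name in VIOLENT_ACTIONS:
        return "refused"       # never automated, confirmed or not
    if action.affects_third_party and not owner_confirmed:
        return "await_human"   # recommendations 2 & 4: human in the loop
    return "allowed"           # low-impact, owner-only actions may run

print(moderate(Action("log_event", affects_third_party=False)))         # allowed
print(moderate(Action("sound_alarm", affects_third_party=True)))        # await_human
print(moderate(Action("restrain_intruder", affects_third_party=True)))  # refused
```

The key design choice is that the default answer is never the most forceful one: a single misjudged face at the door results in a paused alarm waiting for the owner, not an attack.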

