The scope of this project limited the number of key terms we could provide samples for, leaving out a range of queries that participants would presumably have submitted as inputs. The challenge of recognizing a word used within a conversational phrase, as opposed to detecting isolated “wake” words, surfaced immediately. For a proof-of-concept demo, however, this was acceptable, and we settled on the words “job” and “purpose” to train our model on. Another limitation stemmed from the diversity of voices and contexts the model was trained on: we attempted to include as many noise samples as possible and to record samples from a range of individuals and settings. Training with noise samples more representative of the actual context of use would improve reliability.
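One way to frame the problem of catching a keyword inside continuous conversational speech is to smooth per-frame classifier scores over a short window before triggering. The sketch below is a hypothetical illustration, not our actual pipeline: it assumes a three-class setup (our two key terms plus a catch-all noise class) and invented names (`detect`, `frame_scores`, the window and threshold values).

```python
# Hypothetical sketch: smoothing per-frame keyword scores so a keyword
# spoken mid-sentence can trigger, rather than only isolated utterances.
from collections import deque

# Our two key terms plus a catch-all class trained on noise samples.
LABELS = ["job", "purpose", "noise"]

def detect(frame_scores, window=3, threshold=0.8):
    """Return the first keyword whose score, averaged over the last
    `window` frames, exceeds `threshold`; otherwise return None.
    `frame_scores` is an iterable of per-frame probability dicts,
    e.g. {"job": 0.9, "purpose": 0.05, "noise": 0.05}."""
    history = deque(maxlen=window)
    for scores in frame_scores:
        history.append(scores)
        if len(history) < window:
            continue  # not enough context yet
        for label in ("job", "purpose"):  # never trigger on the noise class
            avg = sum(h[label] for h in history) / window
            if avg >= threshold:
                return label
    return None
```

Averaging over a few frames suppresses one-frame spikes from background noise, which is why training on noise samples matched to the deployment setting matters: the noise class absorbs everything that is not a key term.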
We began this project by exploring a variety of interest areas and questions around the themes of machine intelligence, data collection, and predictive machine outcomes. From the outset, we wanted to question not only the assumptions and beliefs embedded in technology and material culture, but also the deterministic influence and constraints set by the stories we tell ourselves. In particular, we homed in on the current representation and perception of machine outcomes as entirely objective and inherently dependable. How do devices and machine intelligences present themselves? If we blindly believe and follow them, do they become the ultimate deterministic, authoritative source? What other ways of enacting and bringing our desired futures into being can technology mediate?