Context

We were inspired by projects like BIY™ - Believe it Yourself, which raised questions about arbitrary data collection practices and the training of machine learning models as the technology becomes more easily accessible.

Another inspiration comes from the observation that large language models like GPT-3 or GPT-4 are "high capability, but low alignment": they are highly capable of generating human-like text, yet less reliable at producing output consistent with human intent. However, the general public's reaction reveals blind trust in, or overhype of, such models, overshadowing these important limitations. This is something we also wanted to question through the introduction of superstition (the compatibility test and fortune generation).
