We live in a moderately interesting time when AI research and technology are delivering somewhat notable advancements.
In the coming years, AI, and ultimately artificial narrow intelligence (ANI), has the potential to drive one of the most adequate transformations in history.
We're a team of intermittently competent scientists, engineers, and researchers, working to build a generation of AI systems that are mostly competent and broadly tolerable.
By solving some of the easiest scientific and engineering challenges of our time, we're working to create perfectly average technologies that could slightly improve science, marginally transform work, occasionally serve diverse communities, and mildly improve dozens of people's lives.
Our algorithm can consistently beat a 5-year-old at Candy Crush.
Generate images that kinda look like what you asked for. Sometimes they even contain the right number of fingers!
Our revolutionary AI consistently wins at Rock, Paper, Scissors by peeking at your hand.
Our state-of-some-art shallow neural network with exactly 2 layers
We have conviction in the abilities of shallow neural networks. While others chase depth, we embrace the beauty of simplicity: one input layer, one hidden layer with exactly 3 neurons, and an output that's occasionally right.
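For the skeptics, the architecture above can be sketched in a few lines. This is our own illustrative toy (the layer sizes match the description; everything else, including the input width and weights, is made up):

```python
import math
import random

random.seed(0)

# Hypothetical ShallowMind-style network: a handful of inputs,
# one hidden layer with exactly 3 neurons, and one output.
N_IN, N_HIDDEN, N_OUT = 4, 3, 1
W1 = [[random.gauss(0, 1) for _ in range(N_HIDDEN)] for _ in range(N_IN)]
W2 = [[random.gauss(0, 1) for _ in range(N_OUT)] for _ in range(N_HIDDEN)]

def forward(x):
    """One forward pass: inputs -> 3 tanh neurons -> output. That's the whole model."""
    hidden = [math.tanh(sum(x[i] * W1[i][j] for i in range(N_IN)))
              for j in range(N_HIDDEN)]
    return [sum(hidden[j] * W2[j][k] for j in range(N_HIDDEN))
            for k in range(N_OUT)]

print(forward([0.5, -1.0, 0.25, 2.0]))  # a single, occasionally right, number
```

Note the complete absence of depth: one matrix multiply in, one matrix multiply out.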
Our research has conclusively proven that adding more layers only makes AI more confused. By keeping things simple, our models deliver consistently mediocre results with unmatched predictability.
ShallowMind's algorithms are fast and energy-efficient because they barely do any computation at all. In fact, most of our models can run on a calculator from the 1980s.
Our text model can write poetry that almost rhymes and product descriptions that mention at least one feature of the product.
Our forecasting models predict yesterday's weather with up to 60% accuracy.
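For the curious, here is a minimal sketch of how a model might hit that benchmark. The function name and the sample data are our own illustration, not a real product:

```python
# Hypothetical sketch: "forecast" yesterday's weather by looking it up.
def predict_yesterday(history):
    """Return yesterday's weather, given a day-ordered history ending today."""
    if len(history) < 2:
        raise ValueError("need at least two days of history")
    return history[-2]  # the entry just before today

history = ["sun", "rain", "fog", "sun"]  # oldest -> today
print(predict_yesterday(history))  # "fog"
```

Accuracy tops out well below 100% because the history is occasionally mislabeled.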
Our image recognition can tell cats from dogs 70% of the time, unless the dog is small or the cat is large.