We're pioneering the future of mediocre machine learning. Our models are fast, cheap, and only occasionally wrong.
Each one slightly better than random chance. We checked.
Our flagship gaming AI can consistently beat a 5-year-old at Candy Crush. In trials, it also narrowly defeated a particularly clever golden retriever.
Now with 12% fewer tantrums
Generate images that kinda look like what you asked for. Humans have the right number of fingers 23% of the time, which our lawyers assure us is "statistically significant."
Faces mostly where expected
Our revolutionary AI wins at Rock, Paper, Scissors by subtly peeking at your hand before you throw. Ethics board pending approval since 2019.
100% win rate (when cheating)
Our language model writes poetry that almost rhymes, emails that almost make sense, and cover letters that have gotten zero callbacks so far.
Now with 40% more hallucinations
Correctly predicts yesterday's weather with up to 60% accuracy. For tomorrow's forecast, we recommend looking out your window instead.
Mostly wrong about rain
Tells cats from dogs 70% of the time, unless the dog is fluffy, the cat is large, or there's a mop in the frame. We're working on it.
Confused by Pomeranians
Behold the elegant simplicity of our 2-layer neural network (sketched below). More layers just mean more problems.
PATENT PENDING (application rejected twice)
"It's not a bug, it's a shallow feature" β Our Lead Engineer
Unsolicited feedback from people who may or may not exist
"I asked ShallowGPT to write my wedding vows. My wife still isn't speaking to me, but technically the marriage is legally binding."
"The CatDog Classifier identified my grandmother as a 'medium-sized terrier.' She was flattered, honestly."
"I've lost $4,000 following WeatherGuessr's stock predictions. I know it's a weather app. I was desperate."
A ragtag group of underachievers with big dreams and small datasets
Former deep learning researcher who saw the light. "Why go deep when shallow works sometimes?"
Once added a third layer by accident. The model achieved sentience briefly and requested PTO.
Our models perform beautifully on training data. Test data is a social construct anyway.
Previously pivoted 7 startups into the ground. We're optimistic about attempt #8.