Orange House Cat
There’s a weird edge case with Large Language Models (LLMs) trained by humans that I’ve seen come up in Google’s reCAPTCHA: sometimes the model identifies an object that merely resembles the thing it’s labeled as, but isn’t actually that thing. Here’s an example: I’ve seen a few reCAPTCHAs come up with one of those motorbikes that aren’t quite full motorcycles (so they look bicycle-ish), but are definitely NOT bicycles...
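The "close but not quite" failure mode above can be sketched in a few lines. The labels and scores below are entirely made up for illustration (the actual models behind reCAPTCHA are not public); the point is just that a near-miss object can edge out the correct class while the margin between the two stays razor-thin.

```python
# Hypothetical illustration of a classifier confusing a moped-style
# motorbike with a bicycle. All scores here are invented for the example.
scores = {"bicycle": 0.46, "motorcycle": 0.41, "moped": 0.09, "scooter": 0.04}

def top_label(scores, margin=0.10):
    """Return the highest-scoring label, plus a flag that is True when the
    runner-up is within `margin` of the winner (i.e. the call is ambiguous)."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    best, second = ranked[0], ranked[1]
    ambiguous = (best[1] - second[1]) < margin
    return best[0], ambiguous

label, ambiguous = top_label(scores)
print(label, ambiguous)  # bicycle True
```

Under these invented numbers the model would answer "bicycle" even though "motorcycle" is only five points behind, which is roughly the situation the CAPTCHA puts a human in: the grid expects one answer, and the near-miss wins.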