Orange House Cat

There’s a weird edge case to using Large Language Models (LLMs) trained on human labels that I’ve seen come up in Google’s reCAPTCHA: sometimes it presents an object that merely resembles the thing it thinks it is, but isn’t actually that thing. Here’s an example: I’ve seen a few reCAPTCHAs come up with one of those motorbikes that aren’t quite full motorcycles (so they look bicycle-ish), but they are definitely NOT bicycles....

<span title='2024-02-15 19:42:49 +0000 UTC'>February 15, 2024</span>&nbsp;·&nbsp;1 min&nbsp;·&nbsp;206 words&nbsp;·&nbsp;Jordan Finnigan