Sunday 19 February 2023

parts of speech (10. 558)

Although it seems a relatively straightforward challenge for a language model, we learn via the new shelton wet/dry that because GPT-2 and similar chatbots work sequentially, predicting only the next word, one word at a time, researchers didn’t really know how the latest cadre of neural networks had such a good track record of knowing when to use the English indefinite article “a” versus “an.” It really points to what an inscrutable black box artificial intelligence presents, and makes me wonder how it would handle languages with a high degree of declension or synthesis. Computer scientists have since isolated a “neuron” of sorts, something they didn’t program, that explains this anticipatory capacity.
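To make the puzzle concrete: here is a minimal sketch of one-word-at-a-time generation, using a tiny hand-built next-word table (this is not GPT-2, and the names `NEXT_WORD` and `generate` are purely illustrative). The point it illustrates is that the article “a” or “an” must be emitted *before* the noun exists, so a model that reliably gets it right must somehow be anticipating whether the upcoming noun starts with a vowel sound.

```python
# Toy autoregressive generator: pick the most probable next word,
# one word at a time, given the last two words of context.
NEXT_WORD = {
    ("<s>",): {"I": 1.0},
    ("<s>", "I"): {"ate": 1.0},
    ("I", "ate"): {"an": 0.9, "a": 0.1},  # the article is chosen *here*,
    ("ate", "an"): {"apple": 1.0},        # before any noun has been generated
    ("ate", "a"): {"banana": 1.0},
    ("an", "apple"): {"</s>": 1.0},
    ("a", "banana"): {"</s>": 1.0},
}

def generate(max_len=10):
    words = ["<s>"]
    while words[-1] != "</s>" and len(words) < max_len:
        context = tuple(words[-2:]) if len(words) >= 2 else tuple(words)
        dist = NEXT_WORD[context]
        # greedy decoding: always take the single most probable next word
        words.append(max(dist, key=dist.get))
    return words[1:-1]  # drop the start/end markers

print(" ".join(generate()))  # → I ate an apple
```

In this toy table the “anticipation” is just baked into the probabilities at the ("I", "ate") step; what the researchers found remarkable is that GPT-2 apparently learned an internal feature playing an analogous role on its own.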