Differences between AI and human learning, and how machines could get better at it
Current LLMs are fed the whole conversation as input on every turn (they work in the “language” space). Machines are good at verbatim memory. Humans don’t remember every single word previously said in a conversation; instead they form a representational sketch of what was said, the big ideas (they work in the “idea” space). Maybe look into a model that can do that?
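One way to make the contrast concrete: instead of carrying the full transcript, a model could keep only recent turns verbatim and compress older turns into a small sketch. Here is a minimal toy sketch of that idea; the class name, the window parameters, and the use of word frequency as the “summarizer” are all illustrative assumptions, not a real memory architecture (a real system would use a learned compressor).

```python
from collections import Counter

class SketchMemory:
    """Toy 'idea-space' memory: keep the last few turns verbatim and
    compress older turns into a bag of salient keywords (the 'sketch').
    Word frequency stands in for a learned summarizer here."""

    def __init__(self, recent_window=2, sketch_size=5):
        self.recent_window = recent_window  # how many turns to keep verbatim
        self.sketch_size = sketch_size      # how many keywords the sketch retains
        self.recent = []                    # recent turns, word for word
        self.keyword_counts = Counter()     # compressed trace of older turns

    def add_turn(self, text):
        self.recent.append(text)
        while len(self.recent) > self.recent_window:
            old = self.recent.pop(0)
            # "Forget" the exact wording, keep only salient content words.
            words = [w.strip(".,!?").lower() for w in old.split()]
            self.keyword_counts.update(w for w in words if len(w) > 3)

    def context(self):
        # What the model would see: a small sketch plus recent verbatim turns,
        # instead of the whole conversation.
        sketch = [w for w, _ in self.keyword_counts.most_common(self.sketch_size)]
        return {"sketch": sketch, "recent": list(self.recent)}

mem = SketchMemory(recent_window=1, sketch_size=3)
mem.add_turn("The project deadline moved to Friday")
mem.add_turn("Remember the deadline changed")
ctx = mem.context()
```

The point of the sketch is that `context()` stays roughly constant in size no matter how long the conversation runs, which is the property human memory has and full-transcript prompting lacks.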
Early human learning is about generalization and association. Association: baby is shown mother while word “mom” is spoken. Generalization: rules are abstracted from the specific examples, e.g. “He eats” ⇒ <noun> then <verb>.
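The two mechanisms above can be caricatured in a few lines of code. This is a deliberately simplified sketch, assuming association amounts to storing word–referent pairs and generalization amounts to abstracting a shared part-of-speech template from tagged examples; the function names and the `NOUN`/`VERB` tags are illustrative, not drawn from any real system.

```python
def associate(pairs):
    """Association: link a spoken word to what was perceived alongside it,
    e.g. the word "mom" with the sight of the mother."""
    lexicon = {}
    for word, referent in pairs:
        lexicon[word] = referent
    return lexicon

def generalize(tagged_sentences):
    """Generalization: abstract the pattern shared by specific examples,
    e.g. "He eats" -> the template (NOUN, VERB)."""
    return {tuple(tag for _, tag in sentence) for sentence in tagged_sentences}

lexicon = associate([("mom", "image_of_mother")])
rules = generalize([
    [("He", "NOUN"), ("eats", "VERB")],
    [("She", "NOUN"), ("runs", "VERB")],
])
```

Both examples collapse to the single rule `(NOUN, VERB)`, which is the sense in which the rule is abstracted away from the specific sentences.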
I feel like a big possible reason LLMs struggle at system 2 thinking is that they can’t actually interact with the world and get feedback.