From Linguistics to Neural Networks
My way into technology began over a decade and a half ago, by way of a long detour. I started coding at 14 — side projects, small experiments, some freelance websites, a Visual Basic order system for a friend's restaurant, a learning app for fellow students while I was working as a tutor at university. The usual self-taught patchwork. But for years it stayed a clandestine hobby running alongside what I considered my real studies: language and cognition.
I studied Linguistics and Cognition, first in Bari and then in Göttingen, and spent most of my twenties on questions about how human language works, both synchronically and diachronically. It was in that academic context, of all places, that I built my first neural networks, inspired by Paul Smolensky's work on Gradient Symbolic Computation. Modelling phonological constraints with vector-space representations turned out to be the bridge.
I find the symmetry quietly satisfying. The languages I now spend my days working with are programming languages, but they are still languages. Most of what I learned about how meaning is built in human grammar transfers surprisingly well to how systems acquire meaning at scale.