Over the last three years, Wintermute has taught me a lot about artificial intelligence, though not always what I expected to learn. The biggest takeaway is that it's one of the biggest moving targets in computer science. From speech synthesis to chess-playing mastery, we constantly demote what once qualified a computer as intelligent and push the bar higher for what counts as an embodiment of artificial intelligence. That's both a good and a bad thing. It's good because it draws more and more researchers, enthusiasts and tinkerers into the field. It's bad because, for those with less passion, it can drive them away from the field altogether.
I recently took some time to review some four-year-old code. Most of it related to the earlier form of Wintermute1 that was supposed to take the world by storm. It also included a potential Qt library that would make it easier to use PocketSphinx in Qt applications, but at the time, I punted on it. Reviewing that code made me realize how little I actually knew about the field and how much further I still have to go today. I don't see it as a mistake or a misstep. I'm actually really glad I made all of those wild assumptions about the world: that one could just load a file full of Web ontology references, smack together a part-of-speech tagger, and automatically have binaries that could understand the English I rolled into it. To a point, it worked. I managed to get the phrase "We are boys" correctly tagged using the grammar and node rules, after weeks of tweaking. But that was weeks of unaided, mostly trial-and-error experimenting and tweaking of said rules, cross-referenced against the books on English grammar, typically meant for writing, that I had on hand. That went only so far before I more or less gave up and got a full-time job that kept me from spending much time on things like this.
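For context, the kind of rule-driven tagging I was fiddling with back then can be sketched as a tiny lookup-plus-suffix tagger. This is a toy illustration, not Wintermute's actual code; the lexicon, tag names, and suffix rules here are my own assumptions:

```python
# Toy rule-based part-of-speech tagger: a small lexicon of known words
# plus a couple of suffix heuristics as a fallback. Illustrative only.

LEXICON = {
    "we": "PRP",    # personal pronoun
    "are": "VBP",   # present-tense verb
    "boys": "NNS",  # plural noun
    "the": "DT",    # determiner
}

def tag(word):
    """Return a Penn-Treebank-style tag for one word."""
    w = word.lower()
    if w in LEXICON:
        return LEXICON[w]
    if w.endswith("ing"):
        return "VBG"  # gerund / present participle
    if w.endswith("s"):
        return "NNS"  # crude guess: plural noun
    return "NN"       # default: singular noun

def tag_sentence(sentence):
    """Tag each whitespace-separated token in the sentence."""
    return [(w, tag(w)) for w in sentence.split()]

print(tag_sentence("We are boys"))
# [('We', 'PRP'), ('are', 'VBP'), ('boys', 'NNS')]
```

Even a sketch like this makes the pain obvious: every new word or construction means another lexicon entry or another hand-written rule, which is exactly why weeks of tweaking only got me one sentence.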
Like I mentioned, this was a good stepping stone for me; it made me realize that the area of artificial intelligence I was most interested in was the one related to linguistics2. I like it because it's woven into the routine of everyday life: speaking and interpreting messages using phonemes and symbols is amazing, and it's almost impossible to imagine human history persisting without it. So from here on, you'll see me diving quite deeply into the world of natural language processing and computational linguistics.
To say that I'm an "expert" when it comes to artificial intelligence would be discrediting the work of Minsky and the like, the cats who gave us the algorithms, theorems and foundations of thought for what we use as AI on the regular. Labelling myself as an enthusiast who's willing to learn a shit ton more is definitely more up my alley. And if there's one way I've managed to learn things, it's by tinkering, breaking and building.