Five years ago, Charlie Gerard learned to code in 12 weeks. Today she is trending on Twitter as the girl who controls the UI of a web page with the power of thought. Charlie told DevStyleR how she developed her project!
What projects are you currently working on and what programming languages and tools are you using?
At work, I am currently part of the team in charge of building the front-end for a product called Jira. The part of the codebase I am working on uses modern tools such as React.js, CSS-in-JS, Storybook, etc…
In my personal time, I build prototypes using different kinds of technologies such as machine learning, hardware, Web Bluetooth, 3D in the browser, and anything I feel like experimenting with.
How did you start coding and how have you progressed over the years?
I learnt to code by doing a coding bootcamp about 5 years ago. At the time, we learnt HTML, CSS, vanilla JavaScript and Ruby for 12 weeks. Since then, I’ve worked in a few different companies where I got exposed to different technologies and ways of working. I’ve also worked on many side projects over the years to experiment with technologies I didn’t get to use at work.
Forgot to share I FINALLY got the mental commands from the @emotiv to work in JavaScript! 🎉😃 Here’s a quick example of me pushing a #threejs cube just by thinking about it! https://t.co/oE6vxI9uWw #javascript #Nodejs #iot #EEG pic.twitter.com/reiU37owFa
— Charlie Gerard 🏳️🌈 (@devdevcharlie) January 29, 2018
What technologies (languages, frameworks, libraries and tools) did you use for the development of the Mind-controlled UI?
To build my mind-controlled interfaces, I bought an Emotiv Epoc brain sensor and built my own JavaScript framework for it, relying on their C++ SDK.
At the time that I bought the sensor, there was no JavaScript tool to use with it so I decided to build one to allow more developers to experiment with this kind of technology without having to learn a new language like C++ or Java. To do this, I used their C++ SDK and built a Node.js addon.
The framework allows you to interact with the sensor (get motion data, facial expressions and mental commands) in JavaScript (Node.js). Once you get this data, you can build any interaction you want, including controlling devices such as a drone, or interfaces such as any web page, a WebVR game, etc…
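To give a rough idea of what that looks like in practice, here is a minimal sketch of wiring live sensor data to an action in Node.js. The module name, the connect() call and the event shape are illustrative placeholders, not the actual API of Charlie's framework or the Emotiv SDK.

```javascript
// Illustrative sketch only: the "brain-sensor" module and its connect()
// callback are hypothetical stand-ins for a framework like the one
// described above, not its real interface.
const sensor = require('brain-sensor'); // hypothetical Node.js addon wrapper

sensor.connect((event) => {
  // A payload could carry motion data, facial expressions and the
  // currently detected mental command with a confidence value.
  if (event.command === 'push' && event.power > 0.5) {
    // React however you like: update a Three.js scene, steer a drone,
    // push a change to a web page, etc.
    console.log('Mental command detected: push');
  }
});
```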
For which parts of the projects did you use libraries, and for which did you do it all from scratch?
To build the JS framework, I used the official Emotiv C++ SDK and then wrote a Node.js addon around it from scratch.
For the different experiments I then built, I used libraries such as Three.js to control a 3D scene with mental commands in the browser, or Johnny-five to control some Arduino components in JavaScript, etc…
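As an example of the Johnny-five side of those experiments, the sketch below turns an Arduino LED on when a mental command is detected. The onMentalCommand() helper is a made-up placeholder for the brain-sensor framework's output, not a real API; only the Johnny-five calls are the library's actual interface.

```javascript
// Sketch: wiring a detected mental command to an Arduino LED with
// Johnny-five. onMentalCommand() is a placeholder stub, not a real API.
const { Board, Led } = require('johnny-five');

const board = new Board();

board.on('ready', () => {
  const led = new Led(13); // built-in LED on most Arduino Uno boards

  onMentalCommand((command) => {
    if (command === 'push') {
      led.on();
    } else {
      led.off();
    }
  });
});

// Placeholder: in a real project this would subscribe to the
// brain-sensor framework's live command stream.
function onMentalCommand(callback) {
  setInterval(() => callback(Math.random() > 0.5 ? 'push' : 'neutral'), 1000);
}
```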
What were the main dev challenges that you’ve encountered?
Personally, the main challenge was that, when I built the framework, I didn’t know any C++. I had to dig into the SDK to try and understand how it worked so that I could write the JS framework around it. It took me quite a long time to get it to work because of that.
How much time for training does the software need in order to “adjust” to a new person?
Before using my framework, a user would have to download some Emotiv software to record their brain waves and train the commands.
This software allows you to select certain "thought" commands that you can then train for 8 seconds each. You can do this once, or multiple times if you want. Once the training step is done, a user file is saved that you can load into your application so the framework can compare live data against the user's training data.
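In code, that training-and-recognition flow might look roughly like the sketch below. The module name, loadUserProfile() and the event payload are assumptions made up for illustration; the real workflow goes through the Emotiv training software and the framework's own API.

```javascript
// Hypothetical sketch of loading a trained user profile and reacting to
// recognised commands. None of these names are the real API.
const sensor = require('brain-sensor'); // hypothetical module

async function main() {
  // The profile file is produced by the vendor's training software,
  // after the user trains each command for ~8 seconds.
  await sensor.loadUserProfile('./profiles/user.json');

  sensor.on('mentalCommand', ({ command, power }) => {
    // "power" stands for how closely the live EEG data matches the
    // user's training data for that command.
    console.log(`Detected "${command}" with confidence ${power}`);
  });
}

main();
```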
Do you use a specific kind of hardware? What kind of sensors do you use?
I have mainly used an Emotiv Epoc sensor, but there are a few other brain sensors available. I have also experimented with a NeuroSky and recently bought an OpenBCI device.
Otherwise, I also know of the Muse headset and soon, a new device called Notion.
For my 1st internal Atlassian hackathon, I spent the last 24h trying to implement my brain sensor framework w/ react-beautiful-dnd:
😉 Select tasks w/ facial expressions
🧠 Move them w/ thoughts
Bonus: If you really want it, they'll disappear… 🧙♀️😂 #reactjs #javascript #IoT pic.twitter.com/eoLhS9YmOM
— Charlie Gerard 🏳️🌈 (@devdevcharlie) June 28, 2019
What is your “conference talk from hell” story?
I think one of the first times I tried to demo my brain sensor experiments on stage, absolutely nothing worked. Doing this kind of demo live can be quite difficult because your state of mind is not the same as when you train the device in a quieter, less stressful environment. The audience was still very supportive, but I was really disappointed in myself because I was really looking forward to showing people that it is possible to build brain-computer interfaces with web technologies.
Since then, I’ve given the same talk a few times and demos went a bit better!
I gave my talk about machine learning for front-end devs @sydjs tonight and had to open up with this amazing quote from @seldo. I’ve never related to a quote that much in my life 😂 pic.twitter.com/9IyUL1SqkA
— Charlie Gerard 🏳️🌈 (@devdevcharlie) July 17, 2019
What are your future dev plans?
Honestly, I have no idea! I have a few ideas for side projects that I've started working on, but nothing really concrete yet. What matters to me the most is that I'm always looking to learn new things!