Five years ago, Charlie Gerard learned to code in 12 weeks. Today she is trending on Twitter for controlling the UI of a web page with the power of thought. Charlie told DevStyleR how she developed her project!
What projects are you currently working on and what programming languages and tools are you using?
At work, I am currently part of the team in charge of building the front-end for a product called Jira. The part of the codebase I am working on uses modern tools such as React.js, CSS-in-JS, Storybook, etc…
In my personal time, I build prototypes using different kinds of technologies such as machine learning, hardware, Web Bluetooth, 3D in the browser, and anything I feel like experimenting with.
How did you start coding and how have you progressed over the years?
— Charlie Gerard 🏳️🌈 (@devdevcharlie) January 29, 2018
What technologies (languages, frameworks, libraries and tools) did you use for the development of the Mind-controlled UI?
For which parts of the project did you use libraries, and which did you build from scratch?
To build the JS framework, I used the official Emotiv C++ SDK and then wrote a Node.js addon around it from scratch.
What were the main dev challenges that you’ve encountered?
Personally, the main challenge was that, when I built the framework, I didn’t know any C++. I had to dig into the SDK to try and understand how it worked so that I could write the JS framework around it. It took me quite a long time to get it to work because of that.
How much time for training does the software need in order to “adjust” to a new person?
Before using my framework, a user would have to download some Emotiv software to do the recording of the brain waves and the training of the commands.
This software allows you to select certain “thought” commands that you can then train for 8 seconds each. You can do it once or multiple times if you want. Once the training step is done, a user file is saved that you can load into your application so the framework can compare live data against the user’s training data.
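One simple way to picture that "compare live data against training data" step is a nearest-centroid classifier: average the feature vectors recorded during training for each command, then label a live sample with whichever command's average it is closest to. This is only an illustrative sketch — the command names, feature vectors, and numbers below are made up, and the real Emotiv pipeline is more sophisticated.

```javascript
// Hypothetical per-command centroids, i.e. averaged feature vectors
// from the user's 8-second training recordings (values invented).
const trainingCentroids = {
  neutral: [0.10, 0.20, 0.10, 0.15],
  push:    [0.80, 0.60, 0.70, 0.75],
};

// Euclidean distance between two equal-length feature vectors.
const distance = (a, b) =>
  Math.sqrt(a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0));

// Classify a live sample as the command with the nearest centroid.
function classify(liveSample) {
  let best = { command: null, dist: Infinity };
  for (const [command, centroid] of Object.entries(trainingCentroids)) {
    const d = distance(liveSample, centroid);
    if (d < best.dist) best = { command, dist: d };
  }
  return best.command;
}

console.log(classify([0.78, 0.55, 0.72, 0.70])); // → push
console.log(classify([0.12, 0.18, 0.10, 0.14])); // → neutral
```

This also hints at why per-user training matters: the centroids are specific to one person's recordings, which is why the saved user file has to be loaded before live data can be compared.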
Do you use a specific kind of hardware? What kind of sensors do you use?
I have mainly used an Emotiv Epoc sensor, but there are a few other brain sensors available. I have also experimented with a NeuroSky and recently bought an OpenBCI device.
I also know of the Muse headset and, coming soon, a new device called Notion.
For my 1st internal Atlassian hackathon, I spent the last 24h trying to implement my brain sensor framework w/ react-beautiful-dnd:
😉 Select tasks w/ facial expressions
🧠 Move them w/ thoughts
— Charlie Gerard 🏳️🌈 (@devdevcharlie) June 28, 2019
What is your “conference talk from hell” story?
I think one of the first times I tried to demo my brain sensor experiments on stage, absolutely nothing worked. Doing this kind of demo live can be quite difficult because your state of mind is not the same as when you train the device in a quieter, less stressful environment. The audience was still very supportive, but I was really disappointed in myself because I had been looking forward to showing people that it is possible to build brain-computer interfaces with web technologies.
Since then, I’ve given the same talk a few times and demos went a bit better!
I gave my talk about machine learning for front-end devs @sydjs tonight and had to open up with this amazing quote from @seldo. I’ve never related to a quote that much in my life 😂 pic.twitter.com/9IyUL1SqkA
— Charlie Gerard 🏳️🌈 (@devdevcharlie) July 17, 2019
What are your future dev plans?
Honestly, I have no idea! I have a few ideas for side projects I’ve started working on, but nothing really concrete yet. What matters most to me is that I’m always looking to learn new things!