• 0 Posts
  • 23 Comments
Joined 2 years ago
Cake day: June 20th, 2023


  • Chiming in with more context: my PhD was in neuroscience, and I worked in a language lab. As others have stated, there is a critical window for learning a language. The biology behind it is fascinating.

    As early as about 9 months of age, your brain begins to decide what speech sounds are important to you. For example, in Japanese the difference between /r/ and /l/ sounds doesn’t matter, but in English it does. Before 9 months, most babies can tell the difference between the two sounds, but babies living in Japanese-speaking environments (without any English) LOSE this ability after 9ish months!

    Language is more than just speech sounds, though. For all the other nuances of language, there are also critical moments where your brain just decides to accept or reject them, and that schedule is coded somewhere in your DNA.



  • If you’re working on a budget like I was when starting out on my own, I recommend making your first purchase a bed frame. You can use Craigslist / FB Marketplace to find some really cheap used options. From there, you can start buying (used) furniture that matches the bed frame. Personally, I needed a nightstand immediately after the bed frame because I wanted somewhere to put my glasses.




  • The consent process for clinical trials has a ton of guidance behind it (ICH GCP), but the onus is on the clinical monitors and hospitals to make sure it’s done correctly. Many trials now generate supporting documentation in which hospital staff are required to describe the circumstances under which consent was obtained. If those documents are generated, the process is auditable.

    Things get a bit hairy when you look at trials in Alzheimer’s and other cognitive disorders, because the patient may not be coherent enough to withdraw from the trial. In those cases, a legal guardian is responsible for the decision.



  • The article brings up some great points, some of which I, an industry insider, wasn’t even aware of, especially the historical context surrounding the AIDS epidemic. I’ll jump into the thread to critique one issue within the article.

    One of the four pillars recommended by the FDA (control groups) is great in theory but can lead to very real problems in practice, specifically in indications with an unmet treatment need or exceptionally rare conditions.

    If you have a disease that is 99% fatal but has no standard-of-care treatment options, is it ethical to ask a participant to enroll in a clinical trial and potentially not receive the study treatment / be on placebo? Or, what if the trial involves an incredibly invasive procedure like brain surgery - is it ethical to perform a sham procedure on the placebo group? Food for thought - and an explanation for why so few trials meet all four criteria proposed by the FDA.

    Happy to answer questions about the industry if anyone has them.


  • Probably not. To get input from the brain, you need to place a sensor near it. But this device doesn’t get inserted into the brain; it sits on the scalp.

    There are plenty of non-invasive brain reading technologies though, like EEG and near-infrared spectroscopy. They’re just big and bulky with low resolution.

    Edit: in the case of prosthetics, it depends on where the disconnect is. If the brain and spinal cord are intact and the issue is in the periphery, then yes, you can read the signal farther away from the brain (at the spinal cord, for example) and work from there.


  • To my knowledge, the motor cortex is located in about the same spot in everyone - the precentral gyrus - and I don’t know of any reported exceptions. Within it, motor neurons are organized into specific regions that control specific body parts. Again, I don’t know of any reported exceptions; my understanding is that everyone’s motor cortex has the same organization. It’s known as the cortical homunculus. https://en.wikipedia.org/wiki/Cortical_homunculus#Motor_homunculus

    So by reading output from a small group of neurons, yes, you could control a prosthetic limb. It’s been done a few times, actually! But you typically need more precision than an EEG electrode provides, so all the examples I can think of use invasive electrodes. (There’s a rough sketch of the decoding idea at the end of this comment.)

    In fact, the brain’s sensory system has a very similar layout - along the postcentral gyrus, with the same stereotyped organization within. If you could stimulate the correct region of the sensory cortex, you could create a prosthetic that allows you to feel.

    There are some more technical limitations, though - there are different types of sensation (e.g., pain, temperature, proprioception (position in space), texture, etc.) that are detected by different receptors in the skin and travel along different wires to the brain. You’d have to be very careful about what you stimulate. And, with our current technology, any implant that delivers electricity to the brain has a limited lifespan due to the brain’s immune system rejecting the implant (this is the aspect I studied).
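    To make that decoding idea concrete, here’s a minimal, purely illustrative sketch (not any specific lab’s pipeline): it simulates binned firing rates from a handful of hypothetical motor-cortex channels and fits a simple linear decoder that maps them to 2D limb velocity. The channel count, bin size, and model choice are all assumptions made up for the example.

```python
# Illustrative toy example only: a linear decoder mapping simulated
# motor-cortex firing rates to 2D limb velocity. All numbers are made up.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_channels = 16      # hypothetical small intracortical array over the arm area
n_bins = 2000        # spike counts in (say) 100 ms bins
true_weights = rng.normal(size=(n_channels, 2))   # hidden tuning of each channel

# Simulated firing rates and the 2D velocities they (noisily) encode
rates = rng.poisson(lam=5.0, size=(n_bins, n_channels)).astype(float)
velocity = rates @ true_weights + rng.normal(scale=2.0, size=(n_bins, 2))

# Fit the decoder on the first half of the data, test on the second half
split = n_bins // 2
decoder = Ridge(alpha=1.0).fit(rates[:split], velocity[:split])
predicted = decoder.predict(rates[split:])

corr_x = np.corrcoef(predicted[:, 0], velocity[split:, 0])[0, 1]
corr_y = np.corrcoef(predicted[:, 1], velocity[split:, 1])[0, 1]
print(f"decoded vs. actual velocity correlation: x={corr_x:.2f}, y={corr_y:.2f}")
```

    Real systems use many more channels and fancier models (Kalman filters, neural networks), but the core idea - firing rates in, movement commands out - is the same.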








  • See Alk’s comment above; I touched on medical applications there.

    As for commercial uses, I see very few. These devices are so invasive, I doubt they could be approved for commercial use.

    I think the future of brain-computer interfacing lies in functional near-infrared spectroscopy (fNIRS). Basically, it uses the same infrared technology as a pulse oximeter to measure changes in blood flow in your brain. Since it uses light (instead of electricity or magnetism) to measure the brain, it’s resistant to basically all the noise endemic to EEG and MRI. It’s also 100% portable. But the spatial resolution is pretty low.

    HOWEVER, the signals have very high temporal resolution. With a strong enough machine learning algorithm, I wonder if someone could interpret the signal well enough for commercial applications. I saw this first-hand in my PhD - one of our lab techs wrote an algorithm that could read as little as 500 ms of data and reasonably predict whether the participant was reading a grammatically simple or complex sentence. (There’s a rough sketch of the general idea at the end of this comment.)

    It didn’t get published, sadly, due to lab politics. And, honestly, I don’t have 100% faith in the code he used. But I can’t help but wonder.
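    For anyone curious what that kind of windowed classification might look like, here’s a rough, hypothetical sketch - not the actual code from our lab. It simulates fNIRS-like responses, takes one 500 ms window per trial, pulls out two simple features, and cross-validates a basic classifier. The sampling rate, features, and effect size are all assumptions.

```python
# Hypothetical sketch only - not the lab's real algorithm. Simulated fNIRS-like
# windows are classified as coming from a "simple" or "complex" sentence.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

fs = 10                # assumed 10 Hz sampling rate, so 500 ms = 5 samples
window = fs // 2       # samples per 500 ms window
n_trials = 200

# Fake hemodynamic responses: "complex" trials get a slightly larger response
labels = rng.integers(0, 2, size=n_trials)            # 0 = simple, 1 = complex
signals = rng.normal(size=(n_trials, window)) + labels[:, None] * 0.8

# Two crude per-window features: mean amplitude and start-to-end slope
features = np.column_stack([
    signals.mean(axis=1),
    signals[:, -1] - signals[:, 0],
])

clf = LogisticRegression()
scores = cross_val_score(clf, features, labels, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

    With only 500 ms of data the features are crude, which is part of why a stronger model (and a lot more data) would be needed for anything commercial.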


  • A traditional electrode array needs to be as close to the neurons as possible to collect data. So, straight through the dura and pia mater, into the parenchyma where the cell axons and bodies are hanging out. Usually, these arrays collect local data without getting any long-distance information - which is a limiting factor of this technology.

    The brain needs widespread areas working in tandem to get most complicated tasks done. An electrode is great for measuring motor activity because motor signals are pretty localized. But something like memory or language? Not really possible.

    There are electrocorticography (ECoG) devices that place electrodes over a wide area and can rest on the pia mater, on the surface of the brain. Less invasive, but you still need a craniotomy to place the device, and the resolution is lower.


  • The most practical medical purpose I’ve seen is as a prosthetic implant for people with brain/spinal cord damage. Battelle in Ohio developed a very successful implant and has since received DARPA funding: https://www.battelle.org/insights/newsroom/press-release-details/battelle-led-team-wins-darpa-award-to-develop-injectable-bi-directional-brain-computer-interface. I think that article over-sells the product a little bit.

    The biggest obstacle to invasive brain-computer implants like this one is their longevity. Inevitably, any metal electrode implanted in the brain gets rejected by the immune system of the brain. It’s a well-studied process where a glial scar forms, neurons move away from the implant, and the overall signal of the device decreases. We need advances in biocompatibility before this really becomes revolutionary.

    ETA: This device avoids putting metal in the brain; instead, it sends axons into the brain. Certainly a novel approach, but one that runs into different issues: the new neurons need to be accepted by the brain, and they need to be kept alive by the device.

    If they moved the cell bodies into the brain and had the device house the axons and dendrites (the neurons’ outputs and inputs), they could maybe let the brain keep the device alive. But that is a much more difficult installation procedure.