Training mRNA Language Models Across 25 Species for $165
147 points | by maziyar 5 days ago
15 comments
- seamossfet 2 days agoThe problem with models like this is that they're built on very little training data we can trace back to verifiable protein data. The Protein Data Bank, and other sources of training data for work like this, contain a lot of broken structures and "creative liberties" taken to infer a structure from instrument data. It's a very complex process that leaves a lot open to interpretation.
On top of that, we don't have a clear understanding of how certain positions (conformations) of a structure affect underlying biological mechanisms.
Yes, these models can predict surprisingly accurate structures and sequences. Do we know if these outputs are biologically useful? Not quite.
This technology is amazing, don't get me wrong, but the average person might see this and wonder why we can't go full futurism and solve every pathology with models like these.
We've come a long way, but there's still a very very long way to go.
- stardust2 2 days agoHow do we get more verifiable protein data? And even if we had better data, would we still not understand how the structure impacts the biology?
- maziyar 5 days ago
- pfisherman 2 days agoNice work! Here is an article you may find helpful if you have not already come across it.[0] You may also want to consider benchmarking against some non-ML methods.[1]
- xyz100 2 days agoWhat makes this dataset or problem worth solving compared to other health datasets? Would the results on this task be broadly useful to health?
- CyberDildonics 2 days agoWhat other "datasets" are you talking about? How do you "solve a dataset" ?
- xyz100 1 day agoYou solve a dataset when you learn what there is to learn about the phenomenon of interest. The limit of such phenomena is “cure all disease”, and clearly this is not solving that.
- CyberDildonics 1 day agoWhat are you talking about? "the phenomenon of interest"? There is nothing you wrote in either comment that makes sense.
What is a "dataset" that has been "solved" and what did the program do that 'solved' it?
- xyz100 1 day agoMNIST (the number classification task) has been “solved” a billion times and it is hard to imagine any subsequent advances there as scores using a variety of methods have hit the saturation point of accuracy. Any further improvements are likely overfitting to noise. Therefore, we know that it is easy to detect handwritten numbers. However, we may not know how to detect other things as well, like reading an MRI. Those datasets/tasks are clearly different and require different techniques. Training an LLM is likewise different.
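The saturation point described here can be illustrated with a toy stand-in: on an easy, well-separated task, even the simplest possible method hits near-ceiling accuracy, leaving later methods nothing meaningful to gain. (This sketch uses synthetic clusters in place of real MNIST data; all numbers are illustrative.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated Gaussian clusters stand in for an "easy" task like MNIST.
X0 = rng.normal(loc=-2.0, scale=1.0, size=(500, 2))
X1 = rng.normal(loc=+2.0, scale=1.0, size=(500, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 500 + [1] * 500)

# Nearest-centroid classification: about as simple as a method gets.
c0, c1 = X0.mean(axis=0), X1.mean(axis=0)
pred = (np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)).astype(int)
acc = (pred == y).mean()
print(f"accuracy: {acc:.3f}")  # near-ceiling; fancier methods can't add much here
```

A harder task (overlapping classes, noisy labels) would leave real headroom between methods, which is the sense in which such a dataset is not yet "solved".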
- CyberDildonics 1 day agohas been “solved” a billion times
If it was really solved, wouldn't it just need to happen once?
You think classifying handwriting of 10 numbers is the same as this, which took someone 55 hours of GPU time to get through?
I have no idea what point you're trying to make and I can't tell if you do either. You were talking about "solving" other "health datasets" but you can't even come up with one or what that means.
- xyz100 9 hours agoIf you want to be literal with language, then do you ever really “solve” anything? Even tying your shoes is not solved. One day you may tie them better, but for practical purposes we can say it is solved.
Likewise, you can spend 55 hours of GPU time to produce very different things. Can those 55 hours cure cancer? Definitely not. Can it pick up correlations with a small subset of proteins that are perhaps not representative of practical problems? Probably. Can it learn a pattern to tie your shoes, given all your life experiences tying them? Sure.
I asked the question to determine what is the impact of the task and dataset. Curing cancer is huge, tying shoes is not. What are the strengths and limitations?
- CyberDildonics 8 hours agoIf you want to be literal with language, then do you ever really “solve” anything?
You are the one who said it and you can't even explain what you meant, you just get mad that anyone would ask.
- xyz100 7 hours agoSince I am hitting the reply depth: you “solve” a dataset or task when you translate a model into actual real-world impact by creating one that actually “works” (not just high accuracy). What is the point of training the model otherwise, other than writing blog posts? Conversely, you can train a model that performs well on the dataset but is less useful in the real world.
This is a health dataset, and there are many inputs and outputs to health (e.g., cell level, protein level, tumors, organs, etc.). In this case it is mRNA-focused, a broad category that potentially translates to immune responses like vaccines (exactly what kind of therapy, I'm not sure, beyond “25 species”). Once the model is trained, you can use it to solve real problems: perhaps to develop a therapy that makes its way to clinical trials and eventually treats some disease. The model by itself is useless without the ability to have that impact.
So for other examples, take any disease (e.g., Covid19), create a dataset to mirror that problem using some technique (e.g., Covid19 mRNA prediction of some sort), and solve it to create a treatment (e.g., get a safe and effective vaccine). Obviously, you can say the vaccine can be improved so it is not “solved”, but most people would be quite happy with an “almost cure for cancer” even if it wasn't literally optimal (we don't even know if a cure for cancer is possible).
My suggestion and question to the author is to outline the implications of the work rather than focusing on accuracy statistics that are meaningless without such context.
- basyt 1 day agoyeah lol no shit. let's not get bothered by reactionaries...
- nradclif 1 day ago"Complete results, architectural decisions, and runnable code below."
This is a weird post, there doesn't seem to be any "below" here. Another comment linked the article: https://huggingface.co/blog/OpenMed/training-mrna-models-25-...
- justinclift 1 day agoYeah. Things like "Complete results, architectural decisions, and runnable code below." are literally how AI outputs stuff, so I'd expect the post was AI-written too. :(
- rubicon33 2 days agoCan someone explain what one might use this model for? As a developer with a casual interest in biology, it would be fun to play with, but honestly I'm not sure what I would do with it.
- colechristensen 2 days agoYou can get your feet wet with genetic engineering for surprisingly little money.
This guy shows a lot of how it's done: https://www.youtube.com/@thethoughtemporium
Basically you can design/edit/inject custom genes into things and see real results, spending on the order of $100-$1000.
- com2kid 2 days agoWe actually did this in my high school genetics class back in 1999! We made bacteria change color by splicing in a gene. Awesome stuff.
The (public!) school had a grant from one of Seattle's biotech boom companies.
- someuser54541 2 days agoIs there something like this in text/readable format?
- _zoltan_ 2 days agoMy main concern is using fungi. If it ends up in my lungs I'm most likely screwed, right?
- nurettin 2 days agoYes, but most students produce their best work while infected.
- colechristensen 2 days agoThis is the classic meme https://www.reddit.com/r/labrats/comments/mmv2ig/lab_strains...
Lab strains of things tend to be extremely sensitive and not human adapted. You shouldn't study and modify human-infecting organisms in your basement anyway. While you shouldn't ignore protective equipment and proper procedure... paranoia about infecting yourself with a lab leak isn't warranted.
- _zoltan_ 1 day agoI'd love to experiment with this stuff; I just literally have no idea how to start safely.
- jazzpush2 1 day agoA codon-based model is cool. I know NVIDIA is building quite a large one.
At GTC they showed a sparse autoencoder (SAE) they built on a smaller version of it, letting you see what their model learned: https://research.nvidia.com/labs/dbr/blog/sae/
- dhruv3006 2 days agoInteresting work - looks like AI for science is having its day right now.
- khalic 2 days ago> In Progress: CodonJEPA
JEPA is going to break the whole industry :D
- digdugdirk 2 days agoCan you explain this? I haven't heard of JEPA, and from a quick search it seems to be vision/robotics based?
- khalic 2 days agoIt’s a self-supervised learning architecture, and it’s pretty much universal. The loss function runs on embeddings, with some other smart architectural choices throughout. Worth diving into for a few hours; Yann LeCun gives some interesting talks about it.
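The "loss on embeddings" idea can be sketched in a few lines of numpy: a context encoder sees a masked input, a predictor guesses the target embedding, and the loss compares embeddings rather than reconstructed pixels or tokens. (The linear/tanh encoders and all shapes here are invented for illustration; real JEPA models use deep networks, learned masking strategies, and an EMA-updated target encoder.)

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    # Toy encoder: linear map + tanh, standing in for a deep network.
    return np.tanh(x @ W)

# Hypothetical shapes: 16-dim inputs, 8-dim embeddings, batch of 4.
W_ctx = rng.normal(size=(16, 8)) * 0.1   # context encoder weights
W_tgt = W_ctx.copy()                     # target encoder (EMA copy in real JEPA)
W_pred = rng.normal(size=(8, 8)) * 0.1   # predictor in embedding space

x = rng.normal(size=(4, 16))
mask = np.ones(16)
mask[8:] = 0.0                           # hide half of each input from the context

z_ctx = encode(x * mask, W_ctx)          # embed only the visible context
z_pred = z_ctx @ W_pred                  # predict the target *embedding*
z_tgt = encode(x, W_tgt)                 # embed the full target (no gradient in practice)

# JEPA's key idea: the loss lives in embedding space, not input space.
loss = np.mean((z_pred - z_tgt) ** 2)
print(f"embedding-space loss: {loss:.4f}")
```

In training, gradients would flow through `W_ctx` and `W_pred` only, which is what lets the model ignore unpredictable input detail instead of wasting capacity reconstructing it.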
- lukeinator42 2 days ago
- colingauvin 2 days agoHN's blindspots never cease to amaze me.
I am a structural biologist working in pharmaceutical design and this type of thing could be wildly useful (if it works).
- justinclift 1 day agoBlind spot?
- simianwords 2 days agoWhat makes these domain-specific models work when we don’t have good domain models for health care, chemistry, economics, and so on?
- colechristensen 2 days ago>we don’t have good domain models for health care, chemistry, economics and so on
Who says we don't?
- simianwords 2 days agoExamples please?
- colechristensen 2 days agoNo, it's really simple to search for domain-specific models being used "in production" all over the place.
- simianwords 2 days agoI didn’t find a single one that outperforms a general model.
- colechristensen 2 days agoOk, alphafold.
- simianwords 2 days agoIt’s not a large language model
- yieldcrv 2 days agoDistributing the load on this will probably be infinitely more useful than Folding@home
- agenexus 23 hours ago[flagged]
- HocusLocus 2 days agogray goo of the future
- skyskys 2 days agohmmmm seems like some fake hype.