teekert · 2 months ago
We are not “in the nanopore era of sequencing”. We are (still) firmly in the sequencing by synthesis era.

Yes, it requires chopping the genome into small(er) pieces (than with Nanopore sequencing) and then reconstructing the genome based on a reference (and this has its issues). But Nanopore sequencing is still far from perfect due to its high error rate. Any clinical sequencing is still done using sequencing by synthesis (at which Illumina has gotten very good over the past decade).

Nanopore devices are truly cool, small, and comparatively cheap though, and you can compensate for the error rate by just sequencing everything multiple times. I’m not too familiar with the economics of this approach though.

With SBS technology you could probably sequence your whole genome 30 times (a normal “coverage”) for below 1000€/$ with a reputable company. I’ve seen $180, but I'm not sure I’d trust that.
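
To make the "sequencing everything multiple times" idea concrete, here is a minimal Python sketch of a majority-vote consensus over toy reads (not a real pipeline; real consensus tools also weight base qualities):

    from collections import Counter

    # Five noisy reads of the same 12-base region (toy data).
    reads = [
        "ACGTTAGCACGT",
        "ACGTTAGCACGT",
        "ACCTTAGCACGT",  # error at position 2
        "ACGTTAGCACGA",  # error at position 11
        "ACGTTAGCACGT",
    ]

    # Majority vote per column: uncorrelated errors rarely agree,
    # so the true base wins once coverage is high enough.
    consensus = "".join(Counter(col).most_common(1)[0][0] for col in zip(*reads))
    print(consensus)  # ACGTTAGCACGT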

Metacelsus · 2 months ago
>you can compensate for the error rate by just sequencing everything multiple times.

Usually, but sometimes the errors are correlated.

Overall I agree, short read sequencing is a lot more cost effective. Doing an Illumina whole genome sequence for cell line quality control (at my startup) costs $260 in total.

bonsai_spool · 2 months ago
> But Nanopore sequencing is still far from perfect due to its high error rate. Any clinical sequencing is still done using sequencing by synthesis (at which Illumina has gotten very good over the past decade).

There is no reason for Nanopore to supplant sequencing-by-synthesis for short reads - that's largely solved and getting cheaper all the while.

The future clinical utility will be in medium- and large-scale variation. We don't understand this in the clinical setting nearly as well as we understand SNPs. So Nanopore is being used in the research setting and to diagnose individuals with very rare genetic disorders.

(edit)

> We are not “in the nanopore era of sequencing”. We are (still) firmly in the sequencing by synthesis era.

I also strongly disagree.

SBS is very reliable, but being common isn't the same as defining the era (if Toyota is the most popular car, does that mean we're in the Toyota internal combustion era? Or can Waymo still matter despite its small footprint?).

Novelty in sequencing is coming from ML approaches, RNA-DNA analysis, and combining long- and short-read technologies.

teekert · 2 months ago
I agree with you. Long reads lead to new insights and, over time, to better diagnoses by providing better understanding of large(r)-scale aberrations, and as the tech gets better it will be able to do so more easily. But it's really not there yet. It’s mostly research, and somehow it’s not improving as much as hoped, I get the feeling.
Onavo · 2 months ago
You can get it pretty damn cheap if you are willing to send your biological data overseas. Nebula genomics and a lot of other biotechs do this by essentially outsourcing to China. There's no particular technology secret, just cheaper labor and materials.
vintermann · 2 months ago
Can you trust it though? It'd be trivially easy to do a 1x read, maybe 2x, and then fake the other 28 reads. And it'd be hard to catch someone doing this without doing another 30x read from someone you trust. There's famously a lot of cheating in medical research, it would be odd if everyone stopped the moment they left academia (there have been scandals with forensic labs cheating too, now that I think about it).
gillesjacobs · 2 months ago
They save money through cheap labour and by batching large quantities for analysis. For the consumer this means long wait times and potentially expired DNA samples.

I tried two samples with Nebula and waited 11 months in total. Both samples failed. Got a refund on the service but spent $50 in postage for the sample kits.

jefftk · 2 months ago
> We are (still) firmly in the sequencing by synthesis era.

It really depends what your goals are. At the NAO we use Illumina with their biggest flow cell (25B) for wastewater because the things we're looking for (ex: respiratory viruses) are a small fraction of the total nucleic acids and we need the lowest cost per base pair. But when we sequence nasal swabs these viruses are a much higher fraction, and the longer reads and lower cost per run of Nanopore make it a better fit.

the__alchemist · 2 months ago
I guess this depends on the application. For the whole human genome? Not the nanopore era. For plasmids? Absolutely.

I'm a nobody, and I can drop a tube into a box in a local university, and get the results emailed to me by next morning for $15USD. This is due to a streamlined nanopore-based workflow.

celltalk · 2 months ago
This is wrong; a lot of diagnostic labs are actually going for nanopore sequencing, since its prep is overall cheaper compared to alternatives. Also, the sensitivity for relevant regions usually matches qPCR, and it can give you more information, such as methylation, on top of that.

A recent paper on classifying acute leukemia via nanopore: https://www.nature.com/articles/s41588-025-02321-z/figures/8

The timelines are exaggerated, but still, it works and that’s what matters in diagnostics.

BobbyTables2 · 2 months ago
I’ve always wondered how the reconstruction works.

It would be difficult to break a modest program into basic blocks and then reconstruct it. Same with paragraphs in a book.

How does this work with DNA?

__MatrixMan__ · 2 months ago
You align it to a reference genome.

It's like you have an intact 6th edition of a textbook, and you have several copies of the 7th edition sorted randomly with no page numbers. Programs like BLAST will build an index based on the contents of 6, and then each page of 7 can be compared against the index; you'll learn that a given page of 7 aligns best at character 123456 of 6, or whatever.

Do that for each page in your pile and you get a chart where the X axis is the character index of 6 and the Y axis is the number of pages of 7 which were aligned there. The peaks and valleys in that graph tell you about the inductive strength of your assumption that a given read is aligned correctly to the reference genome (plus you score it based on mismatches, insertions, and gaps).

So if many of the same pages were chosen for a given locus, yet the sequence differs, then you have reason to trust that there's an authentic difference between your sample and the reference in that location.

There are a lot of chemical tricks you can do to induce meaningful non-uniformity in this graph. See ChIP-Seq for instance, where peaks can indicate histone methylation marks, which typically correspond with a gene that was enabled for transcription when the sample was taken.

If you don't have a reference genome, then you can run the sample on a gel to separate the sequences of different lengths; that'll group by chromosome. From there you've got a much more computationally challenging problem, but as long as you can ensure that it's cut at random locations before reads are taken, you can use overlaps to figure out the sequence, because unlike the textbook page example, the page boundaries are not gonna line up (but the chromosome ends are):

    Mary had a little
    was white as snow
    lamb whose fleece was
    Marry had
    had a little lamb
    a little lamb
    was white
    white as snow
So you can find the start and ends based on where no overlaps occur (nothing ever comes before Mary or after snow) and then you can build the rest of the sequence based on overlaps.

If you're working with circular chromosomes (bacteria and some viruses) you can't reason based on ends but as long as you have enough data there's still gonna be just one way to make a loop out of your reads. (Imagine the above example, but with the song that never ends. You could still manage to build a loop out of it despite not having an end to work from.)
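
A minimal greedy sketch of that overlap-and-merge logic in Python, using the toy fragments above (whole-word overlaps only, and skipping the misspelled "Marry had" read, since handling errors needs fuzzy matching; real assemblers are far more careful):

    fragments = [
        "Mary had a little",
        "was white as snow",
        "lamb whose fleece was",
        "had a little lamb",
        "a little lamb",
        "was white",
        "white as snow",
    ]

    def overlap(a, b):
        """Length in words of the longest suffix of a that is a prefix of b."""
        a, b = a.split(), b.split()
        for k in range(min(len(a), len(b)), 0, -1):
            if a[-k:] == b[:k]:
                return k
        return 0

    # Greedily merge the pair with the longest overlap until one string remains.
    while len(fragments) > 1:
        i, j, k = max(
            ((i, j, overlap(fragments[i], fragments[j]))
             for i in range(len(fragments))
             for j in range(len(fragments)) if i != j),
            key=lambda t: t[2],
        )
        merged = " ".join(fragments[i].split() + fragments[j].split()[k:])
        fragments = [f for n, f in enumerate(fragments) if n not in (i, j)]
        fragments.append(merged)

    print(fragments[0])  # Mary had a little lamb whose fleece was white as snow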

vintermann · 2 months ago
They exploit the fact that so much of our DNA is the same. They basically have the book with no typos, or rather with only the typos they've decided to call canonical.

So given a short sentence excerpt, even with a few errors thrown in, partial string matching is usually able to figure out where in the book it was likely from. Sometimes there may be more possibilities, but then you can look at overlaps and count how many times a particular variant appears in one context vs. another.

One problem is, DNA contains a lot of copies and repetitive stretches, as if the book had "all work and no play makes Jack a dull boy" repeated end to end for a couple of pages. Then it can be hard to place where the variant actually is. Longer reads help with this.

jakobnissen · 2 months ago
There are two ways: assembly by mapping and de novo assembly.

If you already have a human genome file, you can take each DNA piece and map it to its closest match in the genome. If you can cover the whole genome this way, you are done.

The alternative way is to exploit overlaps between DNA fragments. If two 1000 bp pieces overlap by 900 basepairs, that's probably because they come from two 1000 bp regions of your genome that overlap by 900 basepairs. You can then merge the pieces. By iteratively merging millions of fragments you can reconstruct the original genome.

Both these approaches are surprisingly and delightfully deep computational problems that have been researched for decades.
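
A toy illustration of the mapping half (brute force over a tiny "genome"; real mappers use text indexes rather than scanning every offset):

    def map_read(read, ref):
        """Naive mapping: slide the read along the reference and
        return the offset with the fewest mismatches."""
        best_pos, best_mm = -1, len(read) + 1
        for pos in range(len(ref) - len(read) + 1):
            mm = sum(a != b for a, b in zip(read, ref[pos:pos + len(read)]))
            if mm < best_mm:
                best_pos, best_mm = pos, mm
        return best_pos, best_mm

    ref = "ACGTACGTTAGCACGTTTGACCGT"
    print(map_read("AGCACGT", ref))  # (9, 0): exact hit at offset 9
    print(map_read("AGCTCGT", ref))  # (9, 1): same spot despite one error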

bonsai_spool · 2 months ago
This is very easily googled. There are new algorithmic advances for new kinds of sequencing data, but this (from 1994) is the key:

https://en.wikipedia.org/wiki/Burrows–Wheeler_transform
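
The transform itself is only a few lines (toy version; real FM-indexes add suffix-array sampling and rank structures on top so reads can be matched against the reference efficiently):

    def bwt(text):
        """Burrows-Wheeler transform: the last column of the sorted
        rotations of text, with '$' as a unique end marker."""
        text = text + "$"
        rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
        return "".join(rot[-1] for rot in rotations)

    print(bwt("GATTACA"))  # ACTGA$TA -- same letters, grouped for searching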

nextaccountic · 2 months ago
If you broke a string into overlapping blocks you could easily reconstruct it. The key here is that blocks form a sliding window on the string

If blocks were nonoverlapping then yeah the problem is much harder, akin to fitting pieces of a puzzle. I bet a language model still could do it though
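
A sketch of that sliding-window reconstruction (assumes every window is present and no (k-1)-length substring repeats; as noted elsewhere in the thread, repeats are exactly what makes real genomes hard):

    def reconstruct(kmers):
        """Rebuild a string from the full set of its overlapping
        k-length windows, chaining on (k-1)-character overlaps."""
        k = len(kmers[0])
        suffixes = {km[1:] for km in kmers}
        # The first window is the only one whose prefix is not
        # another window's suffix (nothing comes before it).
        out = next(km for km in kmers if km[:k-1] not in suffixes)
        by_prefix = {km[:k-1]: km for km in kmers}
        while out[-(k-1):] in by_prefix:
            out += by_prefix[out[-(k-1):]][-1]
        return out

    print(reconstruct(["CGT", "GTA", "ACG", "TAG", "AGC"]))  # ACGTAGC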

jltsiren · 2 months ago
The basic assumption is that most of the genome is essentially random. If you take a short substring from an arbitrary location, it will likely define the location uniquely. Then there are some regions with varying degrees of repetitiveness that require increasingly arcane heuristics to deal with.

There are two basic approaches: reference-based and de novo assembly. In reference-based assembly, you already have a reference genome that should be similar to the sequenced genome. You map the reads to the reference and then call variants to determine how the sequenced genome is different from the reference. In de novo assembly, you don't have a reference or you choose to ignore it, so you assemble the genome from the reads without any reference to guide (and bias) you.

Read mapping starts with using a text index to find seeds: fixed-length or variable-length exact matches between the read and the reference. Then, depending on seed length and read length, you may use the seeds directly or try to combine them into groups that likely correspond to the same alignment. With short reads, it may be enough to cluster the seeds based on distances in the reference. With long reads, you do colinear chaining instead. You find subsets of seeds that are in the same order both in the read and the reference, with plausible distances in both.
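
A bare-bones version of the seeding step (hash-table index of exact k-mer hits; the diagonal count at the end is a crude stand-in for the clustering/chaining just described):

    from collections import Counter

    def build_index(ref, k):
        """Toy text index: k-mer -> list of reference positions."""
        index = {}
        for i in range(len(ref) - k + 1):
            index.setdefault(ref[i:i+k], []).append(i)
        return index

    def seeds(read, index, k):
        """Exact-match seeds as (read offset, reference offset) pairs."""
        return [(i, pos)
                for i in range(len(read) - k + 1)
                for pos in index.get(read[i:i+k], [])]

    ref = "ACGTTAGCACGTTTGACCGTAGC"
    read = "AGCACGATTG"  # matches ref at offset 5, with one error
    hits = seeds(read, build_index(ref, 5), 5)

    # Seeds on the same diagonal (ref_pos - read_pos) support one alignment.
    print(Counter(pos - i for i, pos in hits).most_common(1))  # [(5, 2)]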

Then you take the most promising groups of seeds and align the rest of the read to the reference for each of them. And report the best alignment. You also need to estimate the mapping quality: the likelihood that the reported alignment is the correct one. That involves comparing the reported alignment to the other alignments you found, as well as estimating the likelihood that you missed other relevant alignments due to the heuristics you used.

In variant calling, you pile the alignments over the reference. If most reads have the same edit (variant) at the same location, it is likely present in the sequenced genome. (Or ~half the reads for heterozygous variants in a diploid genome.) But things get complicated due to larger (structural) variants, sequencing errors, incorrectly aligned reads, and whatever else. Variant calling was traditionally done with combinatorial or statistical algorithms, but these days it's best to understand it as an image classification task.
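
A cartoon of the pileup step (no base qualities, no indels, haploid majority call; the real problem is much messier, which is why the paragraph above ends at image classification):

    from collections import Counter, defaultdict

    ref = "ACGTTAGCACGT"
    # (reference offset, read) pairs, as a mapper might emit them; all
    # three reads carry the sample's A->C difference at position 5.
    aligned = [(0, "ACGTTCGC"), (2, "GTTCGCAC"), (4, "TCGCACGT")]

    pileup = defaultdict(Counter)
    for offset, read in aligned:
        for i, base in enumerate(read):
            pileup[offset + i][base] += 1

    for pos in sorted(pileup):
        base, depth = pileup[pos].most_common(1)[0]
        if base != ref[pos] and depth >= 2:  # crude depth threshold
            print(f"variant at {pos}: {ref[pos]} -> {base} ({depth} reads)")
    # variant at 5: A -> C (3 reads)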

De novo assembly starts with brute force: you align all reads against each other and try to find long enough approximate overlaps between them. You build a graph, where the reads are the nodes and each good enough overlap becomes an edge. Then you try to simplify the graph, for example by collapsing segments, where all/most reads support the same alignment, into a single node, and removing rarely used edges. And then you try to find sufficiently unambiguous paths in the graph and interpret them as parts of the sequenced genome.

There are also some pre-/postprocessing steps that can improve the quality of de novo assembly. You can do some error correction before assembly. If the average coverage of the sequenced genome is 30x but you see a certain substring only once or twice, it is likely a sequencing error that can be corrected. Or you can polish the assembly afterwards. If you assembled the genome from long reads (with a higher error rate) for better contiguity, and you also have short reads (with a lower error rate), you can do something similar to reference-based assembly, with the preliminary assembly as the reference, to fix some of the errors.
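
The pre-assembly error-correction idea is easy to sketch with k-mer counts (toy reads and a toy threshold; real correctors model the coverage distribution and base qualities):

    from collections import Counter

    reads = ["ACGTTAGCA", "CGTTAGCAC", "GTTAGCACG", "TTAGCACGT",
             "ACGTTAGCA", "CGTTGGCAC"]  # last read has an A->G error

    k = 4
    counts = Counter(r[i:i+k] for r in reads for i in range(len(r) - k + 1))

    # At ~5x coverage, k-mers seen only once are probably sequencing
    # errors rather than real sequence, and can be corrected or dropped.
    print([km for km, c in counts.items() if c == 1])
    # ['GTTG', 'TTGG', 'TGGC', 'GGCA'] -- all from the erroneous read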

Danjoe4 · 2 months ago
Nanopore is good for hybrid sequencing. You can align the higher-quality Illumina reads against its longer contiguous reads.
Aurornis · 2 months ago
Interesting concept, but between the broken hardware and the way they gave up before getting anything useful this article was rather disappointing:

> Another problem was our flow cell was malfunctioning from the start — only 623 out of 2048 pores were working.

Is this normal for the machine? Is there a better write up somewhere where they didn’t give up immediately after one attempt?

homeless_engi · 2 months ago
Hi, believe it or not, I have actually done what the authors were attempting. I used saliva rather than blood as a source of DNA and extracted it using a Qiagen kit.

My Nanopore flow cell had nearly every pore working from the start. So I would say that is not normal. Maybe it was stored incorrectly.

LolWolf · 2 months ago
Do you have a write up somewhere? If not, it would be amazing if you wrote one!

I was planning on doing a similar thing (also with saliva) once I finished moving in and had a bit more time after conferences. (But, of course, I’d have to go through and actually figure out all of the mechanics and so on.)

jakobnissen · 2 months ago
I suspect the authors read the number of active pores during sequencing and then wrongly assumed that the non-active ones had a manufacturing defect.

In my experience, most inactive pores are due to a poorly prepared sample. I don't know why, but maybe it blocks or jams the pores.

When I analyzed Oxford Nanopore data a few years ago, I found it to be very sensitive to skilled sample preparation. The data quality varied so much that I could tell which of my lab-technician co-workers (the experienced one or the new one) had prepared the sample by analyzing the data. So I expect that the authors' garage sample prep maybe wasn't great.

Coincidentally, I had a colleague who worked on building a portable sequencing lab powered by a car battery. The purpose was to be able to identify viruses by DNA from a van in rural Central Africa or wherever. Last I talked to her, the technical bottleneck was sample prep - the computational part of the van lab wasn't too hard.

MillironX · 2 months ago
> Is this normal for the machine?

No, it's not "normal," but it is fairly common. When I worked in NGS, nearly 1/4 of flow cells were duds. ONT used to have a policy where you could return the cell and get a new one if it failed its self-test.

sbassi · 2 months ago
It depends on the sample. Usually you have at least 1200 working pores, with a guarantee of at least 800, so maybe he could ask for a refund.
refurb · 2 months ago
Like most analytical methods, the preparation of the sample is key. High quality output comes with careful sample prep so that the analytical process can run optimally.
vintermann · 2 months ago
I think it was pretty interesting in a "what would likely happen if you tried this" way. Negative results are good. A lot of technical problems is what I'd expected though, from my little experience in genetic genealogy.
dunk010 · 2 months ago
Nebula and Dante will do this for like $300, and you can get 30x coverage at every base or even 100x coverage if you pay a little more. The $1000 genome was here more than a decade ago.
zaptheimpaler · 2 months ago
I wanted to try this, but I looked into Nebula a bit more.

Nebula is facing a class action for apparently disclosing detailed genomic data to Meta, Microsoft & Google. The subreddit is also full of reports of people who never received their results years after sending their kits back. There are also concerns about the quality of sequencing and false positives in all DTC genomics testing. Given what happened with 23andme as well and all of this stuff, I'm wary of sending my genetic data to any private company.

mquander · 2 months ago
I was interested to read this because some time ago I had my genome sequenced by Nebula. If you look at the lawsuit you can see that what Nebula did was use off-the-shelf third-party analytics products on their website, including recording analytics pings when users buy a kit, and pings when users use the Nebula website to browse Nebula's high-level analysis of their traits (leaking that the user has those traits to the analytics provider.)

This behavior represents a contemptible lack of respect for users' privacy, but it's important to distinguish it from Nebula selling access to users' genomes.

https://www.classaction.org/media/portillov-nebula-genomics-...

Aurornis · 2 months ago
> There are also concerns about the quality of sequencing and false positives in all DTC genomics testing.

Even when the raw results are accurate there is a cottage industry of consultants and snake-oil sellers pushing bad science based on genetic testing results.

Outside of a few rare mutations, most people find their genetic testing results underwhelming or hard to interpret. Many of the SNPs come with mild correlations like “1.3X more likely to get this rare condition” which is extremely alarming to people who don’t understand that 1.3 times a very small number is still a very small number.

The worst are the consultants and websites that take your files and claim to interpret everything about your life or illness based on a couple SNPs. Usually it’s the famous MTHFR variants, most of which have no actual impact on your life because they’re so common. Yet there are numerous Facebook groups and subreddits telling you to spend $100 on some automated website or consultant who will tell you that your MTHFR and COMT SNPs explain everything about you and your ills, along with which supplements you need to take (through their personal branded supplement web shop or affiliate links, of course).

phyzome · 2 months ago
Yeah, the only way I would ever do DNA sequencing is anonymously...
freehorse · 2 months ago
Yeah, but then basically somebody else gets ownership of your genetic data and gets the right to do anything with it in the context of their "legitimate interests". Not to mention the probability of that company getting hacked or sold, as has already happened with some.
otherme123 · 2 months ago
Note the $2,000 bill includes the DNA extraction machinery and the sequencer itself. The sequencers that Nebula et al. use probably cost over $1 million.

If you want to go even cheaper, and depending on what you want, you can go for an exome instead of a WGS. And a lot of people are sequencing when they really want genotyping.

But I would not be surprised if someone is already getting $100 WGS.

stared · 2 months ago
I see €399 (or $466) for the cheapest variant at https://www.dantelabs.com/products/whole-genome-sequencing - or am I missing something?
sbassi · 2 months ago
Yes, the difference here is that the $1000 tag is an "at-scale" price. You reach that price point by running multiple sequencings with one set of reagents.
subroutine · 2 months ago
Does Nebula or Dante provide BAM or just VCF?
Metacelsus · 2 months ago
Both do. I got mine through Dante, my wife through Nebula.
conradev · 2 months ago
Dante includes a BAM
sroussey · 2 months ago
What about sequencing.com?
jasongill · 2 months ago
Unfortunately, the "MinION Starter Kit" for $1000 appears to no longer be available; the link in the article to the kit goes to a 404 page, and the cheapest MinION device with flow cells is now $4950 USD
jolmg · 2 months ago
Article was posted 2 days ago...
greazy · 2 months ago
The article author probably bought the starter kit a while ago. It might explain why the pore count was low. It's a biological product so it degrades over time.
numpad0 · 2 months ago
These are by no means a new product. I think the early prototypes for these possibly predate the microUSB plug.

The brochures always showed it next to a completely non-sterile laptop, but that never made sense. It's fundamentally bio lab equipment, just small. You probably should be wiping the package with disinfectant, using DNA-cides as needed, or following whatever bioscience people consider the basic common-sense hygiene standards.

greazy · 2 months ago
The thermocycler replacement using an electric kettle is hilarious. That's how old-school DNA amplification was done before the invention of thermocyclers.

OP, you'd get better results if you centrifuged your blood, extracted the white blood cells, and sequenced those instead of whole blood. That's a bit tricky with a lancet and a tiny device though...

jszymborski · 2 months ago
When I was in school in the early 2010s (maybe 17-18yo) our intro to biology course had us thermocycle by alternating between warm tubs of water with the use of an egg-timer. When I later used a thermocycler during my research career I really came to appreciate the little bugger (even though it caused a lot of headaches anyway)
zorgster · 2 months ago
Now you can buy a portable thermocycler with 13k rpm centrifuge and gel electrophoresis all-in-one: the BentoLab, £1299 in the UK. Carry case £54. (They sell a Dipstick DNA Extraction kit for £38.50, pipettes, and mastermix...)

https://bento.bio/bento-lab/

yichab0d · 2 months ago
We did centrifuge :) using the Zymo purification kit
arjie · 2 months ago
I used Nebula (seems to be rebranded and more expensive now) for my wife and me, and for my parents and brother, and it was pretty straightforward. I paid for the 'lifetime' plan, but they removed it before we did it for anyone else, and it was pretty reasonable. I downloaded the FASTQ files and stuck them in an R2 bucket for myself. Nebula cost about $250, and there's a compulsory monthly $50-or-something plan, but you can cancel it right away.

If you're curious about my genome, here are my VCF files https://my.pgp-hms.org/profile/hu81A8CC

If you want to indulge your curiosity some more:

     $ rg "20189511" /Users/george/tmp/genome/nebula_roshan_NG1AW8W7PU.mm2.sortdup.bqsr.hc.vcf
     3499829:chr13 20189511 rs104894396 C T 252.77 . AC=1;AF=0.500;AN=2;BaseQRankSum=1.54;ClippingRankSum=0.00;DB;DP=25;ExcessHet=3.0103;FS=4.008;MLEAC=1;MLEAF=0.500;MQ=60.00;MQRankSum=0.00;QD=10.11;ReadPosRankSum=0.666;SOR=0.160 GT:AD:DP:GQ:PL 0/1:15,10:25:99:281,0,436
Put that into an LLM or look it up here https://www.snpedia.com/index.php/Rs104894396 to find out which pathogenic mutation I am heterozygous for.
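
For anyone who would rather parse than paste into an LLM: pulling the genotype out of a record like that takes a few lines of Python (a sketch only; I shortened the INFO field, and a real VCF is tab-separated with header lines):

    record = ("chr13 20189511 rs104894396 C T 252.77 . "
              "AC=1;AF=0.500;AN=2;DP=25 GT:AD:DP:GQ:PL "
              "0/1:15,10:25:99:281,0,436")

    chrom, pos, rsid, ref, alt, qual, filt, info, fmt, sample = record.split()
    genotype = dict(zip(fmt.split(":"), sample.split(":")))
    print(rsid, ref, ">", alt, "GT:", genotype["GT"])
    # rs104894396 C > T GT: 0/1 (heterozygous: one ref allele, one alt)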

In practice, when my wife and I did carrier screening we didn't do it with Nebula, but carrier screening also confirmed that we had GJB2-related hearing loss genes in common. The embryos of our prospective children were also sequenced so that we could have a child without the condition.

Anyway, if you'd like a test file of a real human to play with, there's mine (from Nebula) for you to take a look at. If you use an LLM you can have some fun looking at this stuff (you can see I'm a man because there are chrY variants in there).

I also used Dante because I wanted to compare the results of their sequencing and variant calling. Unfortunately, they have a different way to tie the sequence back to the user (you take the code they have and keep it safe; Nebula has you put the stuff in a labeled container so it's already mapped by them) and I was in a hurry with other stuff. They never responded to me with any assistance on the subject - not even to refuse the request to get the code for that address - so I have no idea how they work.

The nanopore stuff is very cool, but I heard (on Twitter) there were quality control issues with the devices. I'd love to try it some time later just to line it up with my daughter's genome.

zorgster · 2 months ago
oddly enough I just was looking at someone's data with chr13:20189491 A>G (gnomAD genomes v4 AF=0.00941) - also 0/1 genotype.

I used WGS from Nebula - but would like to back that up with Nanopore raw DNA reads targeted on specific genes where I need more accuracy and to investigate structural differences that Illumina or Nebula's MGI machines can't pick up... also for the additional methylation data.

r0ze-at-hn · 2 months ago
Something fun: you have a CYP11B1 rs4541 G;A. Wouldn't surprise me if you don't like licorice. You also have something I don't see too often, the CYP17A1 −34 T>C, rs743572 (A;G), which compounds on that.

Depends on the sum of all the genes in this area of course, but this one mutation is a big influence on the HPA axis. I would ask if you have lower body weight, heightened anxiety, bad acne as a teenager, episodes of dizziness upon standing, salt cravings, and difficulties with sleep; if so, this would be the main driver, pretty standard nonclassic CAH. If you had ever thought you might have "POTS", the more accurate label would be hypoaldosteronism (but that depends on renin genes).

Since we were poking around, here are some highlights.

Decent chance of being left-handed or ambidextrous, given that you also have PCSK6 rs11855415 A;T at the same time (as it can help with the salt issues); it's something I look for when I see the above two together.

Vitamin D risk given your GG CYP2R1 (your doctor probably checks that yearly anyway), and risk of lower Mg because of this (cramps, muscle twitches?).

B-vitamin-wise, B9 and B12 could be on the lower side given MTRR AA rs1802059 (combined with MTHFR 31 GT 76 CT, MET 30 CG, COMT 99 AG, BHMT rs3733890 G;A). You probably like spinach. If you have TMJ regularly, you need to find the right diet or B-complex for you, which will fix this as well as any hypermobility resulting from the collagen production issues. Higher chance of myopia, especially if you are Gen Z.

TPH2 rs4570625 G;T jumps out on the serotonin path. Vit D can help here; some might say 5-HTP when depressed, but fix the vit D first. Do you like sour gummy candy?

CYP1B1: I see 3 reductions. Combined with the above, I would ask if you have glaucoma in your family history; if so, there's stuff you can do.

CYP1A1 rs1048943 C;T and, really, CYP1A2 rs762551 A;A, so fast caffeine metabolism and melatonin issues. More insomnia.

CYP2E1: you need less acetaminophen to do the same as others.

Intentionally not bothering to go into why, but above average intelligence.

Combine all of the above and there's a decent chance you fall into the bucket of being taller (6'1"?), skinnier, having a hard time falling asleep and also liking to sleep in, higher libido, left-handed, high visual skills, geeky. Possibly synesthesia (a weaker form). Would enjoy a strategy board game over Trivial Pursuit. Earlier hair loss. Higher risk of one form of Alzheimer's (there is stuff you can do today to reduce it). *Do not smoke*. Didn't dive into all of the ADHD genes, but if mild, resolving the above vit D and B-vitamin deficiencies would influence that.

This was 10 minutes of poking around, not a comprehensive look. Mostly I just wanted to add a comment for the general reader that genetic variants are part of larger systems. You would want to do a deeper look, combining it with symptoms as well as lab work, to determine the full impact of any change. For example, the PCSK6 variant reduces the impact of the CYP11B1 variant. Further, you could also easily have something else on the HPA axis that completely negates the NCAH and never have any salt issues at all. Before spending time looking through each gene I would simply ask: hey, do you love to put salt on every meal?

Another one I didn't dig into, but would just ask about first: do you have a big sweet tooth? (NCAH-influenced hypoglycemia.)

Feel free to give me a ping and I can walk you through this better.

There is a reason these always end with a disclaimer: talk to your doctor about making changes to your diet, etc. I am not a doctor, just someone who learned biology/genetics as a hobby, especially given how it can teach tricks to apply to software engineering and my AI/AGI work.

chromatin · 2 months ago
> Intentionally not bothering to go into why, but above average intelligence.

Speaking as a geneticist, it's a shame that this is forbidden knowledge

arjie · 2 months ago
Oh this is marvelous. I'm going to send you an email at the one in your profile, though I won't be upset if you share here. I suppose there's some Barnum Effect risk with this stuff (and of course singular variants don't immediately mean everything as our GCs have pointed out before), so I'll just answer everything as I can here and maybe you and others will find it interesting.

Licorice - no I don't like it

Lower body weight - until 3 years ago, now 84.5 kg / 183 cm

Anxiety - haha, I suppose that's true

Acne - yes

Dizziness upon standing - yes

Salt cravings - yes

Difficulty with sleep / Insomnia - used to be the case, solved in the last few years, strongest in teenage

Pots/Aldosteronism - not that I know of, just tested and sitting 60 bpm, stand up highest is 77 bpm with a continuous monitor on

Vit D - funny, blood tests which I took for the first time two years ago showed 12 ng / mL (low)

Mg - didn't test, but supplements did not change anything when I tried them in isolation so can't be too bad

Spinach - yes

TMJ - no issue here

Myopia - yes with astigmatism (-4.75 spherical -2.5 cylindrical)

Sour gummy candy - not much of a fan

Caffeine/Melatonin - Yes. Caffeine I always get half-caf. Melatonin I take 200-300 ug when I use it.

Acetaminophen - Can't tell, I suppose

Handedness - Right hand dominant, no ambidextrousness

Geeky - Described as so

Synesthesia - probably not, if weak very weak. I used to think I did, but I think that's because when I learned about it as a kid I really wanted to.

Strategy Board Game - you betcha

Hair Loss - Male Pattern Baldness in teenage years haha!

Alzheimer's - how interesting, I am curious

Smoking - Oops, smoked two years in college. Quit hard.

ADHD - I can't imagine this could be likely, but I suppose I had the excitability, impulsiveness, and talking over people things, but it hasn't really caused any real lasting trouble in my life so I can't label it a disorder really. I have previously received a prescription for this condition as an adult, but I did not take the medication for any appreciable amount of time.

Salt - this is very entertaining to my wife, because yes I do often add salt post-cooking to my portion of the meal and frequently complain about undersalting

Sweet Tooth - Yes (heavily dominating my behaviour), however, blood sugar is normal any time I check it 90 - 100 mg / dL . I could wear a continuous monitor and see what it says.

Now, for the intelligence thing. The various Jonathan-Anomaly-related companies these days are definitely trying to move the Overton window on this front. Herasight is the most well known, but I know of a few that are coming up as well. Of course, I'd like to believe this is true, but I suppose the one massive caveat is that (if you run me through peddy, you'll see) I'm South Asian and I know that South Asians have poor presence in most mainstream genomic datasets - a problem I am hoping to either fix or see fixed in my lifetime.

Your standard disclaimer acknowledged.

dash2 · 2 months ago
The obvious question: why are you so relaxed about revealing your whole DNA to the world?
arjie · 2 months ago
Follows from a deeper belief system that the expansion of knowledge is valuable and that humanity can learn even through things (even if they harm me) so long as I am public enough about it. https://wiki.roshangeorge.dev/w/Observation_Dharma
FL33TW00D · 2 months ago
Dante and Nebula have a bad reputation. ySeq has an 8-month wait list. This guy's Nanopore sequencer doesn't work.

It is quite hard to get yourself sequenced in the EU in 2025.

zorgster · 2 months ago
tellmeGen have begun to offer it in Spain... 30X for £269 with 10% discount on now with coupon HALLOWEEN10 until Nov 4th.

https://shop.tellmegen.com/en/collections/ultra

ml_basics · 2 months ago
The graph at the beginning, showing the cost of sequencing over time falling faster than Moore's law, stops in 2015. Would love to see how things have progressed since then. Casually googling, I only saw plots up to 2021, but it looks to me like progress has been slower than Moore's law since ~2015. Maybe things will change when Nanopore gets more reliable.
dust42 · 2 months ago
The graph also only starts in 2001. I worked as a student at EMBL (the European Molecular Biology Laboratory) in the bio-physics instrumentation group in the mid-'90s. The group was developing prototypes of thin-film electrophoresis DNA sequencers. Pharmacia Biotech then bought some of the tech and brought it to market. AFAIR they were some of the fastest sequencers at the time, but we are talking low 100s of base pairs per day.
mfld · 2 months ago
The NHGRI updated these plots for years. Sad to see there has been no update since 2022, presumably due to lack of funding.

Sub-$100 genomes could be within reach in the next 5 years, from what I have seen.