I've been studying viruses lately, and have found that the line between virus/exosome/self is much more blurry than I realized. But, given the niche interest in the subject, most articles are not written with an overview in mind.
What sorts of topics make you feel this way?
I had the same "problem" as you. What finally made me feel I sort of cracked it was those videos. The way I think of it now is: They let you do matrix multiplication. The internal state of the computer is the matrix, and the input is a vector, where each element is represented by a qubit. The elements can have any value 0 to 1, but in the output vector of the multiplication, they are collapsed into 0 or 1. You then run it many times to get statistical data on the output to be able to pinpoint the output values more closely.
1) The state of an n-qubit system is a 2^n dimensional vector of length 1. You can assume that all coordinates are real numbers, because going to complex numbers doesn't give more computational power.
2) You can initialize the vector by taking an n-bit string, interpreting it as a number k, and setting the k'th coordinate of the vector to 1 and the rest to 0.
3) You cannot read from the vector, but exactly once (destroying the vector in the process) you can use it to obtain an n-bit string. For all k, the probability of getting a string that encodes k is the square of the k'th coordinate of the vector. Since the vector has length 1, all probabilities sum to 1.
4) Between the write and the read, you can apply certain orthogonal matrices to the vector. Namely, if we interpret the 2^n dimensional space as a tensor product of n 2-dimensional spaces, then we'll count as an O(1) operation any orthogonal matrix that acts nontrivially on only O(1) of those spaces, and identity on the rest. (This is analogous to classical operations that act nontrivially on only a few bits, and identity on the rest.)
The computational power comes from the huge size of matrices described in (4). For example, if a matrix acts nontrivially on one space in the tensor product and as identity on nine others, then mathematically it's a 1024x1024 matrix consisting of 512 identical 2x2 blocks - but physically it's a simple device acting on one qubit in constant time and not even touching the other nine.
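The model in (1)-(4) is small enough to simulate classically for a few qubits. Here's a minimal sketch in Python/NumPy (real amplitudes only, as point 1 allows; the function names are mine, not standard): initialize a basis state, apply a single-qubit gate by tensoring it with identities, and read out the probability distribution.

```python
import numpy as np

def basis_state(n, k):
    """Point (2): 2^n-dim vector with a 1 in coordinate k, 0 elsewhere."""
    v = np.zeros(2**n)
    v[k] = 1.0
    return v

def apply_1q_gate(state, gate, target, n):
    """Point (4): a 2x2 orthogonal gate on one qubit. Mathematically we
    build the full 2^n x 2^n matrix by tensoring with identities, but
    physically the device only touches one qubit."""
    op = np.array([[1.0]])
    for q in range(n):
        op = np.kron(op, gate if q == target else np.eye(2))
    return op @ state

def measure_probs(state):
    """Point (3): probability of reading string k is the squared
    k'th coordinate."""
    return state**2

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate (orthogonal)
psi = basis_state(3, 0)                        # |000>
psi = apply_1q_gate(psi, H, target=0, n=3)     # superpose the first qubit
print(measure_probs(psi))  # 0.5 at index 0 (|000>) and index 4 (|100>)
```

A real simulator would never materialize the full matrix (that's exactly the exponential blowup the comment describes); it's built here only to make the tensor-product structure visible.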
It's by Andy Matuschak and Michael Nielsen, and it is excellent. Have fun!
If you understand Turing Machines, you probably also understand other automata. So you probably understand nondeterministic automata [1].
A quantum computer is like a very restricted nondeterministic automaton, except that the "do several things at once" is implemented in physics. That means that, just like an NFA can be exponentially faster than a DFA, a QC can be exponentially faster than a normal computer. But the restrictions on QCs make that a lot harder to achieve, and so far it only works for some algorithms.
As to why quantum physics allows some kind of nondeterminism: if you look at particles as waves, then instead of a single location you get a probability function that tells you "where the particle is". So a particle can be "in several places at once". In the same way, a qubit can have "several states at once".
> What I don't understand is how a programmer is supposed to determine the correct solution when their computer is out in some crazy multiverse.
Because one way to explain quantum physics is to say that the wave function can "collapse" [2] and produce a single result, at least as far as the observers are concerned. There are other interpretations of this effect, and this effect is what makes quantum physics counterintuitive and hard to understand.
[1] https://en.wikipedia.org/wiki/Nondeterministic_finite_automa...
[2] https://en.wikipedia.org/wiki/Wave_function_collapse
However, even with understanding how a Quantum Computer works at its most basic level I still have difficulty understanding the more useful Quantum Algorithms:
https://en.wikipedia.org/wiki/Shor%27s_algorithm
https://en.wikipedia.org/wiki/Grover%27s_algorithm
It explains things in terms a computer scientist can understand. As in: it sets out a computational model and explores it, regardless of whether we can physically realize that machine.
Hope this helps!
I've found that to be the clearest way of understanding what qcs do.
I've often heard it said that Quantum Computers can crack cryptographic keys by trying all the possible inputs for a hashing algorithm or something handwavey like that. Are they just spitting out "probable" solutions then? Do you still have to try a handful of the solutions manually just to see which one works?
Quantum computing is all about finding ways to hack the interference process to compute more than you otherwise would have.
https://metacpan.org/pod/Quantum::Superpositions
As far as I can tell this one still outperforms all existing "hardware implementations".
I say this as someone who passed 2 semesters of graduate QM.
That's funny, because my EE math concentration was on advanced calculus. I took two semesters of a-calc and got A's, but I only know how to compute a Jacobian and apply it, not its origin story. It's a very weird feeling to understand the motions but not the ... depth?
The basic idea is that by making the amplitudes of the qubits destructively interfere with each other in certain ways, you can eliminate all of the wrong answers to the question you're trying to answer.
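That cancellation can be made concrete with a toy 2-qubit Grover search, sketched in NumPy (for N = 4 basis states, a single Grover iteration happens to cancel all the wrong answers exactly):

```python
import numpy as np

def grover_2qubit(marked):
    """One Grover iteration on 2 qubits (N = 4 basis states)."""
    n = 4
    state = np.full(n, 0.5)                # uniform superposition
    state[marked] *= -1                    # oracle: flip sign of the answer
    s = np.full(n, 0.5)
    diffusion = 2 * np.outer(s, s) - np.eye(n)
    state = diffusion @ state              # inversion about the mean
    return state**2                        # measurement probabilities

print(np.round(grover_2qubit(2), 6))  # → [0. 0. 1. 0.]
```

After the oracle, the marked amplitude is negative; "inversion about the mean" then pushes the unmarked amplitudes to zero via destructive interference, so a single measurement yields the answer with certainty in this tiny case.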
But it could have some bleeding edge new applications from the TCP/IP space for urgent point, new methods for cryptography, or speeding up algorithms for searching. ¯\_(ツ)_/¯
Generally quantum computers are good for three things
* factoring numbers (and other highly related order-finding problems). RIP RSA, but not that applicable outside of crypto.
* unstructured search (brute-forcing a problem in only O(sqrt(n)) guesses instead of an average of n/2 guesses). Certainly useful... but it's not a big enough speedup to be earth-shattering.
* simulating various quantum systems (so scientists can do experiments easier). Probably by far the most useful/practical application in the near/medium term.
There's not a whole lot else they are good for (that we know of, yet)
The basic gist I get is that quantum computing, for a very specific set of problems, like optimization, lets you search the space more efficiently. With quantum mechanics you can associate computations with positive or negative probability amplitudes. With the right design, you cause multiple paths to incorrect answers to have opposite amplitudes, so that interference causes them to cancel out and never actually happen to begin with. That's just my reading of the comic over and over, though.
Proof? Just look at all the replies you got: each one is dozens of pages of complex (imaginary) math, control theory, and statistics.
The hardest part of QC is exactly what you described: how to extract the answer. There is no algorithm, per se. You build the system to solve the problem.
This is why QC is not a general purpose strategy: a quantum computer won't run Ubuntu, but it will be one superfast prime factoring coprocessor, for example (or pathfinder, or root solver). You literally have to build an entire machine to solve just one problem, like factoring.
Look at Shor's algorithm: it has a classical algorithm and then a QC "coprocessor" part (think of that like an FPU looking up a transcendental from a ROM: it appears the FPU is computing sin(), but it is not, it is doing a lookup... just an analogy). The entire QC side is custom built just to do this one task:
https://en.wikipedia.org/wiki/Shor%27s_algorithm
In this example he factors 15 into 5x3, and the QC part requires FFTs and Tensor math. Oy!
Like I said, it will take decades for this to become easier to explain.
For fun, look at the gates we're dealing with, like "square root of not": https://en.wikipedia.org/wiki/Quantum_logic_gate
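The "square root of NOT" is easy to check numerically: there is a matrix V with V·V = NOT, something no classical one-bit operation can have. A quick NumPy sketch:

```python
import numpy as np

# "Square root of NOT": a single-qubit gate V with V @ V == NOT.
V = 0.5 * np.array([[1 + 1j, 1 - 1j],
                    [1 - 1j, 1 + 1j]])
NOT = np.array([[0, 1],
                [1, 0]])

print(np.allclose(V @ V, NOT))  # → True
```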
Also why, exactly, are they buying the exact assets that they are buying (govt. debt, high-yield bonds, etc..) and why not others (e.g. stocks or put money into startups)? And then, what happens if a debtor pays back its debt? Is that money consequently getting "erased" again (just like it's been created)? What happens if a debtor defaults on its debt? Does that money then just stay in the economy, impossible to drain out? What is the general expectation of the central banks? What percentage of the debt is expected to default and how much is expected to be paid back?
And specifically in the case of central banks buying govt. debt: Are central banks considered "easier" creditors than the public? What would happen if a country defaults on a loan given by a central bank? Would the central bank then go ahead and seize and liquidate assets of the country under a bankruptcy procedure to pay off the debt (like it would be standard procedure for individuals and companies)?
The liquidity injected is supposed to be taken out later, thus removing the inflationary distortion. Whether it will or not is anyone's guess. 2008's injections have yet to be taken out.
Central banks are easier creditors because, while autonomous, they are the same country as the government! So it's technically like owing yourself money. A central bank that cooperates with the debtor country (itself) would never force a default, and is thus never an acute problem. Of course, infinite money printing should lead to dangerous inflation.
The bad news is that by ignoring inequality, they may be just causing it.
That doesn't sound surprising when all that injected money goes directly to banks instead of individuals.
If you or a friend wants a crash course on econ, check it out.
If Central Banks can create money without negative effects, then
- why tax people?
- why even work? Can't we just print enough money for everyone and live happily ever after?
I realize these questions are quite provocative and their answering only explains if it will work but not how or when it will fail.
Printing money is actually more or less equivalent to a tax, because it reduces the value of the existing money supply.
> - Can't we just print enough money for everyone and live happily ever after?
No, because printing money redistributes wealth, it doesn't create it.
The main purpose of a central bank imo is to keep money creation at arms length from government, so a rubbish government can't fiddle with the financial system too much.
So that common services (eg. healthcare in Canada, education everywhere, roads) can be collectively paid for.
- why even work? Can't we just print enough money for everyone and live happily ever after?
Because there wouldn't be enough resources. (Fiat) currency is just a medium of exchange for real stuff. Instead of growing wheat and trading it for your wooden furniture, the state provides a medium of exchange so we can both just transact in money. eg. when I don't need more furniture but you still need wheat.
Money =/= wealth
I would say that some mathematicians went into economics to win Nobel prizes (which they did win), and I guess they would probably be quick to point this out as well.
A better but harder resource is, I believe, by Piketty (the inequality guy): from balance-sheet recessions there are usually only a few ways out. He and his co-authors go through every single known recession in every single country (obviously biased towards recent and Western ones). What I took away from it is that without population growth the US needs hyperinflation, to default on its debt, or to increase tax revenues sharply and/or decrease entitlements sharply. It's up to you to guess which one of those three is most likely.
But the budgetary situation is not tenable.
- Assets that they are buying? The idea is to keep the banking system solvent and to prevent a domino effect where the liquidation of one big bank results in a run on other banks. The big banks got into trouble because they took depositors' money and invested in junk which went belly-up. The federal government insures everyone's bank deposits; if the banks went belly-up, the FDIC would have to pay out. Better that the banks stay solvent.
- There are cases like Greece defaulting on its international loans. The EU forced Greece to agree to an austerity plan, lowering Greece's payments on the condition that Greece change its national spending, which is deeply unpopular in both the EU and Greece. But there is no other alternative. Well, the alternative is what happened to Weimar Germany after WWI: hyperinflation, economic destruction, and the longing for a savior.
So the aim is to take advantage of this transient fluctuation and the way the ripple propagates.
But even in a perfect (or future) world where everybody instantly reprices all goods relative to the new amount of available currency, there is still an effect: how the newly "created" currency is distributed. Who gets the new shiny coins? So this is equivalent (if the repricing is done) to a kind of global, instantaneous tax-and-subsidy: everybody is taxed by the percentage of currency created (relative to the total existing amount), and the lucky ones receiving the fresh money are thus getting a subsidy.
I'd love to see this properly explained, because it definitely has a counter intuitive ring to me.
Now our Bank loans out 50,000 of those hackerbucks to Customer B. It does this by crediting her account with 50,000 hackerbucks, but notice that Customer A still has 1 million in his account - so now there's 1,050,000 hackerbucks in apparent existence - we've created 50,000 hackerbucks from thin air. If Customer B withdraws the loan money to go spend it, the Bank will have 950,000 in reserves and an asset worth 50,000 (the loan). Customer B will have 50,000 in cash.
What we've actually done is increase the "M2", one of the measures of how much money is in the economy.
If Customer B either repays the loan or defaults on it, that new money disappears. In the loan repayment case, the Bank goes back to having 1,000,000 in reserves, and in the loan default case the loan asset becomes worthless and it is left with only 950,000 in reserves (the other 50,000 is out there with wherever Customer B spent it).
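The whole example can be written as a toy ledger, counting deposits plus cash held by the public as the money stock (a sketch; real M2 has more components than this):

```python
# Toy ledger for the example above: deposits count as money, so
# crediting a loan account creates money, and repayment destroys it.
deposits = {"A": 1_000_000}   # claims customers have on the bank
cash_in_public = 0            # hackerbucks circulating outside the bank

def money_stock():
    return sum(deposits.values()) + cash_in_public

assert money_stock() == 1_000_000

deposits["B"] = 50_000                 # bank credits B's account: money created
assert money_stock() == 1_050_000      # 50,000 from thin air

cash_in_public += deposits.pop("B")    # B withdraws and spends the loan
assert money_stock() == 1_050_000      # withdrawing doesn't destroy money

cash_in_public -= 50_000               # B earns 50,000 back and repays
assert money_stock() == 1_000_000      # repayment destroys the created money
```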
> Macroeconomics. Central banks are "creating" a trillion here, a trillion there, like nobody's business. What is the thought process that central bankers have gone through to make these decisions?
The general consensus is that central banks should stay passive and keep prices stable. However, in periods of crisis, like the one we live in, the central bank can support the economy. In ordinary times, creating trillions would lead to inflation. But here the idea is more to save the economy in the short term, because that's always cheaper than repairing it. Central bankers agreed to create trillions so that banks do not go bankrupt like they did in 1929. By creating trillions, they also keep interest rates low for governments so that they can still borrow.
> But what are the consequences?
Some inflation. Another consequence is that investors will invest in riskier assets afterward to keep their profitability target. (Again, because lending trillions will lower the interest rates)
> Also why, exactly, are they buying the exact assets that they are buying (govt. debt, high-yield bonds, etc..)?
They usually buy low-risk, highly liquid assets. Putting trillions into startups is infinitely more complicated for a central bank because it implies high monitoring costs, and it also takes a lot of time to create those kinds of contracts. Remember that the goal is to provide a lot of liquidity to the economy as fast as possible. There is also an academic debate about giving money directly to the general public (known as "helicopter money"), but it has received little attention from central bankers.
> And then, what happens if a debtor pays back its debt? Is that money consequently getting "erased" again ?
Yep, pretty much... Apart from fiat money, money is constantly created and destroyed. It is mostly created by private banks when they grant loans, and it is destroyed when you repay them. Of course banks cannot do whatever they like and create money at will, but remember that "deposits DO NOT make the credits" (though in some way they define how much you could create).
> What percentage of the debt is expected to default and how much is expected to be paid back? What would happen if a country defaults on a loan given by a central bank?
Central banks buy bonds, and bonds are pretty much always paid back. And if not, the central bank will not suffer much. Cases of countries not repaying are very scarce and exceptional (I can only think of Argentina). Anyway, a country CANNOT go bankrupt like a person, and in general, comparing countries with individuals or companies is not a good idea. Countries are around pretty much forever (in a financial sense); you aren't. Countries can levy taxes; individuals cannot.
>Yep, pretty much...
I would add that if the system is fractional-reserve, repayment increases the bank's reserves, allowing more money to be created. So while it's technically true that the money is destroyed, you could see the next loan as its reincarnation, no..?
I didn't go here in my response above because my vague understanding is that we're not strictly a fractional reserve system any more, though I don't understand how.
I think eventually it will have bigger consequences, but it will take some time for these trillions to filter on down.
I remember once outcome of the 2008 crisis was that consumer goods like cereal boxes stayed the same size, but the bag inside the box got smaller.
> Macroeconomics. Central banks are "creating" a trillion here, a trillion there, like nobody's business. But what are the consequences?
Nobody quite knows; it's still hotly contested between left wing lovers of Keynes and right wing believers in austerity.
> What is the thought process that central bankers have gone through to make these decisions?
Probably largely a political one. Central banks may be trying to fulfill a remit set by law (e.g. bank of england: keep inflation below x%) and are trying to deliver on that. (why? too much or too little inflation both cause problems, I guess we somehow reached consensus on a "sane" amount that keeps pace with genuine growth of wealth within the economy)
> Also why, exactly, are they buying the exact assets that they are buying (govt. debt, high-yield bonds, etc..) and why not others (e.g. stocks or put money into startups)?
I think this is about distributed decision making. The central bank does not have the expertise to decide which stocks or startups represent the best investments. The examples here involve lending money to government, presumably the idea being the latter is better placed to decide what to do with the money. Another example is buying assets from other banks, which are again better placed to decide which businesses/homeowners/etc represent a more sound investment as they do it on a daily basis (from a profit/loss point of view ... of course we debate whether or not that's the case on a societal level).
> What would happen if a country defaults on a loan given by a central bank?
Internally it would depend on laws and the balance of political power within the country. Between countries, depending on the currency, the country could do crazy stuff like print excessive amounts of money to repay the loan (Germany did this in the early 1920s, leading to hyperinflation) or it could, as you say, just default. The country's credit rating would then be downgraded, making it harder for it to raise credit in the future.
> Would the central bank then go ahead and seize and liquidate assets of the country under a bankruptcy procedure to pay off the debt (like it would be standard procedure for individuals and companies)?
Not the bank, but the country making the loan, may first negotiate some debt relief with strings attached e.g. preferential trade agreements. Beyond that, I have no idea what precedent exists.
A lot of these things are kind of unknowable because they depend on future human behaviour in ways you can't really predict. A lot of George Soros's theory of reflexivity is along those lines. People think they are calculating on the basis of fundamentals, but the things that look like fundamentals are actually functions of human behaviour, so the system is inherently unstable. He's made a few bob from that.
Everything is being jostled around randomly. The molecules don't have brains or seeker warheads. They can't "decide" to home in on a target.
The only mechanisms for guidance are: diffusion due to concentration gradients, movement of charged molecules due to electric fields, and molecules actually grabbing other molecules.
It's all probabilities. This conformation makes it more likely that this thing will stick to this other thing. You may have heard that genes can be turned on or off. How? DNA is literally wound on molecular spools in your cell nuclei. When the DNA is loosely wound other molecules can bump into it and transcribe it -- the gene is ON. When the DNA is tightly spooled, other molecules can't get in there and the gene is OFF for transcription. There's no binary switch, just likelihoods.
Everything is probabilistic, but the probabilities have been tuned by evolution through natural selection to deliver a system that works well enough.
A tRNA molecule at body temperature travels at roughly 10 m/s. Assuming a point-sized tRNA and a stationary ribosome of radius 125 * 10^-10 m, the ray cast by the moving tRNA will collide with the ribosome when their centers are within 125 * 10^-10 m of each other. The path of the tRNA sweeps a "collidable" circle of radius 125 * 10^-10 m, for a cross-sectional area of 5 * 10^-16 m^2. Multiplied by the tRNA velocity, the tRNA sweeps a volume of 5 * 10^-15 m^3 per second. Constrained inside an ordinary animal cell of volume 10^-15 m^3, the tRNA would have swept the entire volume of the cell five times over in a single second. Obviously the collision path would have significant self-overlap, but at this rate it's quite likely for the two to collide at least once in any given second.
Now, consider that this analysis was only for a single ribosome/tRNA pair. A single ribosome will experience this collision rate multiplied by the total number of tRNA in the cell, on the order of thousands to millions. If a ribosome is bombarded by tens of thousands of tRNA in a single second, it's very likely one of those tRNA will (1) be charged with an amino acid, (2) be the correct tRNA for the current 3-nucleotide sequence, and (3) collide specifically with the binding site on the ribosome in the correct orientation. In actuality, a ribosome synthesizes a protein at a rate of ~10 amino acid residues per second.
Any given molecule in the cell will experience millions to billions of collisions per second. The fact that molecules move so fast relative to their size is what allows these reactions to happen on reasonable timescales.
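The swept-volume estimate above is easy to reproduce (same toy numbers as in the comment: a point-sized tRNA, a stationary ribosome, an ordinary animal cell):

```python
import math

# Back-of-envelope collision estimate from the comment above.
v_trna = 10.0          # m/s, approximate thermal speed of a tRNA
r_ribosome = 125e-10   # m, ribosome radius (tRNA treated as a point)
v_cell = 1e-15         # m^3, volume of an ordinary animal cell

cross_section = math.pi * r_ribosome**2      # "collidable" disc, ~5e-16 m^2
swept_per_second = cross_section * v_trna    # ~5e-15 m^3/s
sweeps_of_cell = swept_per_second / v_cell

print(f"cell volumes swept per second: {sweeps_of_cell:.1f}")  # → ~4.9
```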
I know 4 billion years is a long time and the earth has a lot of matter rattling on it at any given time, but if every atom in the universe was a computer cranking out a trillion characters per second, you'd only have a 1 in a quarter quadrillion chance of making it to 'a new nation' in the first sentence of the Gettysburg address. Seeing the complexity in even the most trivial biological system just makes me scratch my head and wonder how its possible at all.
I'm not invoking God here. I just see a huge gulf in complexity that is difficult for me to traverse mentally.
Once they are in close enough proximity to bump into each other, intermolecular forces can come into play to get the "docking process" done.
For something like transcription, once they are "docked", think of it like a molecular machine - the process by which the polymerase moves down the strands is non-random.
There are also several ways to move things around in a more coordinated fashion. Often you have gradients of ion concentration, and molecules that want to move a certain direction within that gradient. You also have microtubules and molecular machinery that moves along them to ferry things to where they need to be. You can also just ensure a high concentration of some molecule in a specific place by building it there.
http://www.righto.com/2011/07/cells-are-very-fast-and-crowde...
But in a nutshell, the animations are heavily idealized, showing the process when it succeeds, slowing it way, way down, and totally ignoring 90% of the other nearby material so you can see what's going on. Then you remember that you have just a bajillion cells within you, all containing this incredibly complex machinery and... it's really kind of humbling just how little we actually know about any of it. Not to discredit the biologists and scientists for whom this is their life's work; we've made incredible amounts of progress over the last century. It's just... we're peeking at molecular machinery that is so very small, and moves so quickly, that it's nigh impossible to observe in realtime.
1) Compartmentalization of biological functions. It's why a cell is a fundamental unit of life, and why organelles enable more complex life. Things are physically in closer proximity and in higher concentrations where needed.
2) Multienzyme complexes. Multiple reactions in a pathway have their catalysts physically colocated to allow efficient passing of intermediate compounds from one step to the next.
https://www.tuscany-diet.net/2019/08/16/multienzyme-complexe...
3) Random chance. Stuff jiggles around and bumps into other stuff. Up to a point, higher temperatures mean more bumping around, meaning these reactions happen faster, and the more opportunities there are for these components to fly together in the right orientation, the more life stuff can happen more quickly. There's a reason the bread dough that apparently everyone is making now will rise faster after yeast is added if the dough is left at room temp versus allowed to do a cold rise in the fridge. There are just fewer opportunities for things to fly together the right way at a lower temperature.
3a) For the ultra-complex proteins binding to the DNA, how those often work in reality is that they bind sort of randomly and scan along the DNA for a bit until they find what they're looking for, or fall off. Other proteins sometimes interact with proteins that are bound to the DNA first, which act as recruiters telling the protein where to land.
My favorite illustration was a video of simulated icosahedral viral capsid assembly. The triangular panels were tethered together to keep them close enough to keep slamming into each other. Even then, the randomness and struggle was visceral. Lots of hopeless slamming; tragic near-catches that failed to hold; pieces being smashed apart again; misassembly. It was clear that without the tethers forcing proximity, there'd be no chance of successful assembly.
Nice video... it's on someone's disk somewhere, but seemingly not on the web. The usual. :/
> yeast
Nice example. For a temperature/jiggle story, I usually pair refrigerating food to slow the bacterial jiggle of life, with heating food to jiggle apart their protein origami string machines of life. With video like https://www.youtube.com/watch?v=k4qVs9cNF24 .
> Compartmentalizing
I've been told the upcoming new edition of "Physical Biology of the Cell" will have better coverage of compartmentalization. So there's at least some hope for near-term increasing emphasis in introductory content.
The amount of complexity is just absolutely insane. My favourite example: DNA is read in triplets. So, for example, "CAG" adds one Glutamine to the protein it's building[1].
There are bacteria that have optimised their DNA in such a way that you can start at a one-letter offset, and it encodes a second, completely different, but still functional protein.
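A sketch of what an overlapping reading frame means (the sequence and the three-entry codon table here are illustrative toys, not a real bacterial gene):

```python
# Tiny subset of the standard codon table, just for this demo.
CODONS = {"CAG": "Gln", "AGC": "Ser", "GCA": "Ala"}

def translate(seq, frame):
    """Read a DNA sequence in triplets starting at the given offset."""
    return [CODONS[seq[i:i + 3]]
            for i in range(frame, len(seq) - 2, 3)
            if seq[i:i + 3] in CODONS]

seq = "CAGCAGCAG"
print(translate(seq, 0))  # → ['Gln', 'Gln', 'Gln']
print(translate(seq, 1))  # → ['Ser', 'Ser']
```

Shifting the start by one letter regroups the same string into entirely different codons, hence an entirely different protein.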
I found the single cell to be the most interesting subject. But of course it's a wild ride from top to bottom. The distance from brain to leg is too long, for example, to accurately control motion from "central command". That's why you have rhythm generators in your spine that are modulated from up high (and also by feedback).
Every human sensory organ activates logarithmically: Your eye works with sunlight (half a billion photons/sec) but can detect a single photon. If you manage to build a light sensor with those specs, you'll get a Nobel Prize and probably half of Apple...
[0]: https://amzn.to/2zzDt8P
[1]: https://en.wikipedia.org/wiki/DNA_codon_table
As a dancer, I have been fascinated by that fact. It means that dancers do not dance to the beat as they hear it - it takes too much time for the sound to be transformed by the ear/brain into an electrical pulse that reaches your leg. Instead, all dancers have a mental model of the music they dance to that is learnt by practice/repetition.
Dancing is just synchronizing that mental model to the actual rhythm that is heard. When I explained that to a bellydancer friend, she finally understood the switch she had made from being a beginning dancer to an experienced dancer who 'dances in their head'.
Note that the 4th edition is (sort of) freely available at the NIH website. The way to navigate that book is bizarre, though, as the only way to access its content is by searching.
https://www.ncbi.nlm.nih.gov/books/NBK21054/
PS: To give an idea of scale: the speed of sound is 343 m/s, and the diameter of a cell nucleus is ~6 × 10^-6 m.
1. These molecules are moving around a lot. The kinetic energy of molecules at room or body temperature gives them impressive velocity relative to their scale, and they're also rotating, both as whole molecules and internally.
2. Compatible molecules are like magnetic keys and locks. They attract each other and the forces align with meeting points. The same way that proteins fold spontaneously.
So the remaining part is getting concentrations appropriate for what you want to happen - and that's a matter of signaling molecules and "automatic" cell responses to changes in equilibrium. It's a really chaotic system and it's a wonder it works at all.
I imagine that's also one reason life is imprecise, i.e. no two individuals are alike even with identical genes. There's a lot of extra "entropy" introduced by that mess of a soup.
To wit, the idea is that you cannot distinguish whether you are in an accelerated frame or in a gravitational field; alternatively stated, if you’re floating around in an elevator you don’t know whether you’re freefalling to your doom or in deep sidereal space far from any gravitational source (though of course, since you’re in an elevator car and apparently freefalling... I think we’d all agree on what’s most likely, but I digress).
Anyway, what irks me that this is most definitely not true at the “thought experiment” level of theoretical thinking: if you had two baseballs with you in that freefalling lift, you could suspend them in front of you. If you were in deep space, they’d stay equidistant; if you were freefalling down a shaft, you’d see them move closer because of tidal effects dictated by the fact that they’re each falling towards the earth’s centre of gravity, and therefore at (very slightly) different angles.
Of course, they’d be moving slightly toward each other in both cases (because they attract gravitationally), but the tidal effect is additional and present in only one scenario, allowing one to (theoretically) distinguish the two, apparently violating the bedrock Equivalence Principle.
I never see this point raised anywhere and I find it quite distressing, because I’m sure there’s a very simple explanation and that General Relativity is sound under such trivial constructions, but I haven’t been able to find a decent explanation.
The first part of the argument is that for single point particles falling, the effect of gravity is the same for all particles. This suggests that we should model gravity as something intrinsic to spacetime itself, rather than as a field living on top of spacetime, which could couple to different particles with different strengths.
The second part of the argument, which is what you point out, is that gravity can have nontrivial tidal effects. (This had better be true, because if all gravitational effects were just equivalent to a trivial uniform acceleration, then it would be so boring that we wouldn't need a theory of gravity at all!) This suggests that whatever property of spacetime we use to model gravity, it should reduce in the Newtonian limit to something that looks like a tidal effect, i.e. a gradient of the Newtonian gravitational field. That leads directly to the idea of describing gravity as the curvature of spacetime.
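To make the "gradient" step concrete: the relative acceleration of two nearby freefalling test masses separated radially by d is approximately d·(dg/dr), i.e. it is governed by the derivative of the field, not the field itself. A minimal numeric sketch (Earth values assumed, helper names mine):

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # Earth's mass, kg

def g_field(r):
    """Newtonian field strength at radial distance r (m)."""
    return G * M / r**2

def field_gradient(r, h=1.0):
    """Central-difference estimate of dg/dr: the tidal stretching
    per metre of radial separation. A uniform field would give 0."""
    return (g_field(r + h) - g_field(r - h)) / (2 * h)

R = 6.371e6
dgdr = field_gradient(R)   # negative: the field weakens with altitude
```

For a uniform field the gradient vanishes, which is exactly the case where freefall is indistinguishable from no gravity at all; it is this second derivative of the potential that survives into the relativistic picture as curvature.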
So both parts of the argument give important information (both historically and pedagogically). Both parts are typically presented in good courses, but only the first half makes it to the popular explanations, probably out of simplification.
Can you please explain to me how you went from "looks like a tidal effect in the Newtonian limit" to "a gradient of the Newtonian gravitational field"?
The real principle of relativity is a bit more subtle (sometimes called the strong principle): that the effects of gravity can be explained entirely at the level of local geometry, without any need for non-local interaction from the distant body that is generating the gravitational field. To describe the geometry of non-uniform fields, we need more sophisticated mathematical machinery than what is implied by the elevator car thought experiment, but nonetheless, the elevator example is a useful launching point for that type of inquiry.
Clearly it will fail given a big enough lift to experiment in, since a big enough lift would essentially include whatever object is creating that gravitational pull (or enough to conclude its existence from other phenomena). However, these effects are nonlocal: you need two different points of reference for them to show up (like your two baseballs). In fact, tidal forces are almost by definition nonlocal.
The precise definition involves describing curved spacetime and geodesics, but that one is really hard to visualize as a thought experiment. The thought experiment does offer insight though, as it is possible to imagine that, absent significant local variations in gravity, you cannot distinguish between free-fall and a (classical) inertial frame of reference without gravity. This insight provides the missing link that allows you to combine gravity with the laws of special relativity and therefore electromagnetism, including the way light bends around heavy objects, which provided one of the first confirmations of this theory.
This point isn't raised anywhere because it's mostly a pedantic point that has nothing to do with the thought experiment. You shouldn't try and decompose thought experiments literally, otherwise you'll get caught up in unimportant details like this. Just assume the elevator is close enough to the earth such that the field lines are effectively parallel, or better yet, just pretend the elevator is in an infinite plate field.
If you think it's sneaky to "implicitly" assume they're in the same direction, I would point out that this is no different from assuming they have the same magnitude. It would be kinda dumb to say "well this 1m/s^2 acceleration can't possibly be equivalent to gravity because gravity is 9.8m/s^2, so the statement is obviously wrong and they're trying to trick me!!"... same thing for direction.
The force that would be exerted by acceleration versus gravity is different. The force we think of as gravity points toward a centre that changes with your position, while acceleration comes from a uniform direction without regard to your position.
I had to apologize and say that the explanation was over simplified and really it would work, say, only for some creatures living exactly on the floor of the elevator.
One of the two, at a challenging high school, made Valedictorian (surprise to her parents who didn't know she had long been first in her class) then in college PBK, got her law degree at Harvard, started at Cravath-Swain, went for an MD, and now is practicing medicine. Bright niece.
What sets science apart from most other methods of seeking answers is its focus on disproof. Your goal as a scientist is to devise experiments that can disprove a claim about the natural world.
This misconception rears its head most prominently in discussions at the intersection between science and public policy. Climate change. How to handle a pandemic. Evolution. Abortion. But I've even talked to scientists themselves who from time to time get confused about what science can and can't do.
The problem with believing that science proves things is that it blinds its adherents to new evidence paving the way to better explanations. It also leads to the absurd conclusion that a scientific question can ever really be "settled."
Science also doesn't seek only disproof. It uses both examples and counterexamples to confirm or deny a claim, or to strengthen or weaken one's confidence in it.
It is simply wrong to think that scientific questions can never be definitively settled. Clearly there are some hypotheses that have been difficult (and may be impossible) to prove, for example, Darwin's idea that natural selection is the basis of evolution. There's ample correlative evidence in support of natural selection, but little of the causal data necessary for "proof" (until perhaps recently). In the case of evolution the experiments required to prove that natural selection could lead to systematic genetic change were technically challenging for a variety of reasons.
In the case of climate change, the problem again is that the evidence is correlative and not causal. Demonstrating a causal link between human behavior or CO2 levels and climate change (the gold standard for "proof") is technically challenging, so we are forced to rely on correlations, which is the next best thing. But, you are right, it is not "proof".
Establishing causality can be difficult but not impossible - the standard is "necessary and sufficient". You must show necessity: CO2 increase (for example) is necessary for global warming; if CO2 remains constant, no matter what else happens to the system global temperatures remain constant. And you must also demonstrate sufficiency: temperatures will increase if you increase CO2 while holding everything else constant. Those are experiments that can't be done. As a result, we are forced to rely on correlation - the historical correlation between CO2 and temperature change is compelling evidence that CO2 increases cause global warming, but it is not proof. It then becomes a statistical argument, giving room for some to argue the question remains "unsettled".
My point is that there are plenty of examples in science where things have been proven -- DNA carries genetic information, DNA (usually) has a double stranded helical structure, V=IR, F=Ma, etc. And there are things that are highly likely, but not "proven", e.g., human activity causes of climate change.
While some of the issues you bring up remain unproven, what's really absurd is to think that no scientific questions can be settled.
This is not mutually exclusive with being against the attacks on science. Just because we shouldn't treat things as proven doesn't mean we can't come to a general consensus on a topic and act as if it was true. Climate change is real. Evolution is real. Don't inject yourself with bleach. Having a small number of quacks say 'its just a hypothesis and actually god is responsible for climate change and evolution' without any evidence doesn't change the general consensus and doesn't mean we have stop everything until we prove the negative.
Ultimately I think most of us agree in principle. Most of what we're discussing here is minor semantic differences in vocabulary.
That said, it is indeed annoying when people who don't understand science interpret "open for disproof" to mean "it's easy to disprove." Quantum mechanics and the second law of thermodynamics could in principle be disproven, but the evidentiary burden would be extremely high. (Insert obligatory Carl Sagan quote here.)
No. What is the basis for these claims?
They're both wrong.
It's not true that CO2 increase is necessary for global warming. If the sun got a lot hotter, global temperatures would rise. If non-CO2 GHGs increased, global temperatures would rise. If the overall albedo of the planet changes, global temperatures can rise. There are literally thousands of things that could cause the temperature to rise.
It's also not true that a CO2 increase, holding everything else constant, would lead to long-term or even medium-term warming. We have no idea what the ecosystem will do for any given change in CO2 levels, since there are countless species that are net producers or net consumers of atmospheric CO2, all of which have exponential growth and feedback loops.
Still, even though both of those claims are wrong, CO2 increase may still cause global warming.
Furthermore, the things you claim are proven, are not proven, they are true by definition. All molecules carry information, and the fact that DNA carries genetic information is a direct consequence of the fact that it is DNA. V=IR by definition. F=ma by definition. There's no such thing as a "force" or "mass" or "acceleration" entity per se, these are metrics that are by definition equal in a given physical framework.
There is no way to 'technically' prove anything in science, and the reasons are simple:
(1) The past is gone - you can't access it
(2) You can't see the future
(3) Your knowledge of the present is extremely limited and inaccurate
These are the limitations of the real world, and science does its best to provide utility within that. It only focuses on making future predictions using the observed past as evidence, because you only can do that. You can't check your model in the present, because you can't instantaneously observe anywhere you aren't already observing. Checking your model on the past relies on what you think happened, i.e. what allegedly happened, but there is absolutely no way to truly know.
You can't even really prove anything 'novel' in mathematics, which is the only place where you can actually prove anything, but even there all proofs are effectively just framing something that was already implied axiomatically in a way that allows our limited human minds to see the relevant/useful patterns that aren't immediately obvious to us.
My point is, acting as though you can truly prove anything in science,
> what's really absurd is to think that no scientific questions can be settled
is not only wrong, but in my opinion is a distraction from what science is actually for. It's not about settling questions. Science is never settled, and that's part of what's beautiful about it. It's about reducing our own ignorance and proving our past selves wrong, discovering patterns and models that equip us with the knowledge to build a better world for ourselves and the rest of humanity.
Why lie about being a great soccer player when you're already great at basketball? Let's focus on the beauty of science as a great journey of growth and exploration that accelerates the progress of humanity, instead of trying to make it do something that isn't possible in the real world.
It sounds to me like the grandparent is 100% correct.
> It is simply wrong to think that scientific questions can never be definitively settled.
They made no such claim, speaking of intuition.
> Clearly there are some hypotheses that have been difficult (and may be impossible) to prove, for example, Darwin's idea that natural selection is the basis of evolution
I've seen very little evidence in online discussions (Reddit for example) among armchair scientists that the theory of evolution is anything short of cold, hard, scientific fact.
> In the case of climate change, the problem again is that the evidence is correlative and not causal. Demonstrating a causal link between human behavior or CO2 levels and climate change (the gold standard for "proof") is technically challenging, so we are forced to rely on correlations, which is the next best thing. But, you are right, it is not "proof".
Is this (it is not proven) the message they're sending when they say things like "The science is in", just as one example?
> Establishing causality can be difficult but not impossible - the standard is "necessary and sufficient". You must show necessity: CO2 increase (for example) is necessary for global warming; if CO2 remains constant, no matter what else happens to the system global temperatures remain constant. And you must also demonstrate sufficiency: temperatures will increase if you increase CO2 while holding everything else constant. Those are experiments that can't be done. As a result, we are forced to rely on correlation - the historical correlation between CO2 and temperature change is compelling evidence that CO2 increases cause global warming, but it is not proof. It then becomes a statistical argument, giving room for some to argue the question remains "unsettled".
This is not the message I've heard, at all, from any mainstream news source, and it's certainly not the understanding of 95% of "right minded" people I've ever encountered.
> While some of the issues you bring up remain unproven, what's really absurd is to think that no scientific questions can be settled.
What's even more absurd, to me, is how you managed to find a way to interpret his text in that manner. And you're obviously (based on what you've written here), a genuinely intelligent person. Now, imagine how the average person consumes and processes the endless stream of almost pure propaganda, from both "sides" on this topic and many others.
The unnecessarily dishonest manner in which the government and media have chosen to represent (frame) reality to the general public has left an absolutely massive number of easily exploitable attack vectors for "conspiracy theorists" to exploit. And if you are of the opinion that all conspiracy theorists are idiots so you have nothing to worry about, consider the possibility that this too has been similarly misrepresented to you.
If a society chooses to largely abandon things like logic and epistemology in the education of its citizens, thinking propaganda is a suitable replacement, don't be surprised when things don't work out in your favor. If we can barely manage such things here, why should we expect Joe and Jane six-pack to somehow pull it off?
Any hypothesis that I invent at this very moment, is from this perspective in the best position a hypothesis can ever be. There is no disproof. There is even no coherent argument against it, because I literally just made it up this second, so no one had enough time to think about it and notice even the obvious flaws. This is the best moment for a hypothesis... and it can only get worse.
I understand that there is always a chance that the new hypothesis could be correct. Whether for good reasons, or even completely accidentally. (Thousand monkeys with typewriters could come up with the correct Theory of Everything.) Yes, it is possible. But...
Imagine that there are two competing hypotheses, let's call them H1 and H2.
Hypothesis H1 was, hundred years ago, just one of many competing options. But when experiment after experiment was done, the competing hypotheses were disproved, and only this one remained. For the following few decades, new experiments were designed specifically with the goal of finding a flaw in H1, but the experimental results were always as H1 has predicted them.
Hypothesis H2 is something I just made up at this very moment. There was not enough time for anyone to even consider it.
A strawman zealot of simplified Popperism could argue that a true scientist should see H1 and H2 as perfectly equal. Neither was disproved yet; and that is all that a true scientist is allowed to say. Maybe later, if one of them is disproved in a proper scientific experiment, the scientist is allowed to praise the remaining one as the only one that wasn't disproved yet. To express any other opinion would be a mockery of science.
Of course, there always is a microscopic chance that H1 might get disproved tomorrow, and that H2 might resist the attempts at falsification. But until that happens, treating both hypotheses as equal is definitely NOT how actual science works. And it is good that it does not.
In actual science, there is something positive you are allowed to say about H1. Something that would make the strawman zealot of simplified Popperism (e.g. an average teenager debating philosophy of science online) scream about "no proof ever, only disproof". The H1 is definitely not an absolute certainty. But there is something admirable about having faced many attempts at falsification, and surviving them.
But, I wonder if you can describe H1 as being a stronger hypothesis than H2 by virtue of withstanding more and higher quality attempts to disprove it?
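One hedged way to formalize "stronger": treat each survived falsification attempt as Bayesian evidence. If a false hypothesis has, say, only a 50% chance of surviving a well-designed experiment (an illustrative number, not a measured one), then every survival multiplies the odds in H1's favour:

```python
def update_on_survival(prior, p_survive_if_false=0.5):
    """Posterior P(H is true) after H survives one falsification
    attempt. A true hypothesis survives with probability 1; the
    false-survival rate here is an illustrative assumption."""
    numerator = prior * 1.0
    denominator = numerator + (1 - prior) * p_survive_if_false
    return numerator / denominator

p = 0.01                 # H1 starts as one unremarkable option among many
for _ in range(10):      # ten independent experiments it survived
    p = update_on_survival(p)
# p is now above 0.9: never "proven", but no longer on equal footing
# with a hypothesis invented this second
```

On this toy model H2, with zero survived attempts, sits at its prior, while H1's credibility compounds with each failed attempt to kill it, which matches how working scientists actually treat the two.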
I should add: As a human being, it is probably impossible to separate the scientist from the philosophy in which they explore, proceed with, and promote their work. In some cases, it might not be something they are even aware of. Instead, the scientific system (as a sort of world institution) should itself be designed to always seek out and protect truth, regardless of prevailing contemporary knowledge.
[0] https://en.wikipedia.org/wiki/Philosophy_of_science
If this claim were true, it would disallow science from making true claims, because no experiment can disprove such claims. Truth is a delicate matter and can't be handled by simple methods. Questions may not be settled, but they can become difficult to challenge.
Isn't that exactly how science works? It does not make true claims. It produces statements with disclaimers: if these conditions hold, then Y, so long as we don't observe otherwise.
You cannot use the scientific method to definitely say: "X is true".
When I took physics they basically said "at first scientists were disturbed by the fact that magnets imply that two objects are interacting without any physical contact, but then Faraday came along and said 'the magnets are actually connected by invisible magnetic field lines' and that resolved everything."
How does saying "but what if there's invisible lines connecting them" resolve anything? To be clear, I'm not objecting to any of the actual electromagnetic laws or using field lines to visualize magnetic fields. It's just that I don't get how invoking invisible lines actually explains anything about how objects are able to react without physical contact.
(Also, it is not lost on me that this question boils down to "fraking magnets, how do they work?")
The reason some people regard Faraday's original explanation of the eponymous law (it is worth noting that at the time it was widely regarded as inadequate and handwavy) as illuminating is because Faraday visualized his "lines of force" as literal chains of polarized particles in a dielectric medium, thereby providing a seemingly mechanistic local explanation of the observed phenomena. Not much of this mindset survived Maxwell's theoretical program and it has very little to do with how we regard magnetism today. Instead, the unification of electricity and magnetism naturally arises from special relativity, whereas the microscopic basis of magnetism requires quantum mechanics. There isn't really any place for naive contact mechanics in the modern picture of physics, so in that sense I would regard Faraday's view as misleading.
Finally, I can't end any "explanation" of magnetism without linking the famous Feynman interview snippet [1] where he's specifically asked about magnetism. It doesn't answer your question directly, but it's worth watching all the more because of it.
[1] https://www.youtube.com/watch?v=MO0r930Sn_8
So, if we don't have the notion of fields, then we face the puzzle of how object A knows about remote object B. How does one object know about the motions of literally every other object in the Universe? Perplexing.
Once you come up with the idea of a field, okay you have to at some level accept that there are fields that permeate all of space. But what this intellectual cost buys you is that now an object only has to sense the field local to it to respond to all objects in the universe.
Think of objects bobbing on the ocean. One way to conceptualize that is that any object anywhere could cause this object here to bob in some way. How does this object know about all the other objects? Instead we could say that there is ocean everywhere. Locally, objects bobbing put ripples into the ocean. Locally, ripples cause objects to bob. Each object no longer needs to "know about" every other object it just needs to react to the ripples at its location, and the ripples get sent out from its location.
Does this help?
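The locality point above can be sketched as a toy simulation: a 1D "ocean" on a ring where each cell is updated using only its immediate neighbours, yet a disturbance still reaches distant cells by propagating through the medium. All names and parameter values here are illustrative:

```python
def step(u, u_prev, c2=0.25):
    """One time step of a discrete 1D wave equation on a ring.
    Each cell reads only its two immediate neighbours -- no cell
    ever 'knows about' a distant cell, yet ripples still carry
    the interaction across the whole medium."""
    n = len(u)
    return [
        2 * u[i] - u_prev[i]
        + c2 * (u[(i - 1) % n] - 2 * u[i] + u[(i + 1) % n])
        for i in range(n)
    ]

# a localized bump (a bobbing object) spreads outward over time
u_prev = [0.0] * 32
u = [0.0] * 32
u[16] = 1.0
for _ in range(8):
    u, u_prev = step(u, u_prev), u
# after 8 steps the ripple has reached at most 8 cells from the
# source, so far-away cells are still exactly undisturbed
```

The key design point is in the update rule: it is purely local, which is exactly the intellectual trade the field concept buys you.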
Imagine two circles in 2D that repel each other more strongly the closer you bring them together, like magnets do. In 2D it would look like they're interacting at a distance, but maybe in 3D they're two slightly flexible cylinders that are actually touching at the ends, just not in the 2D plane you're observing. The interaction is "properly physical" in 3D, but in the 2D plane it seems magical.
That's a way that I imagine it in 2D vs 3D, so this might be similar in 3D vs ND, where N > 3. Of course this is all baseless speculation, but it seems kinda plausible in my head.
Edit: bad drawing of what I meant: https://imgur.com/362tcHg
Maxwell picked up this idea and ran with it, developing a mathematical theory for the dynamics of the electromagnetic field. Instead of one object somehow magically interacting at a distance, interactions between objects resulted from changes in the electromagnetic field that propagated through space.
The final paragraphs of Maxwell's "Treatise on Electricity and Magnetism" are somewhat relevant.
This is 30-40 years after Faraday first wrote about lines of force, and there still wasn't really consensus about how to explain electromagnetic phenomena.
[emphasis added by me]
> Chapter XXII: Theories of Action at a Distance
> ...
> There appears to be in the minds of these eminent men, some prejudice, or a priori objection, against the hypothesis of a medium in which the phenomena of radiation of light and heat, and the electric actions at a distance take place. It is true that at one time those who speculated as to the causes of physical phenomena, were in the habit of accounting for each kind of action at a distance by means of a special aethereal fluid, whose function and property it was to produce these actions. They filled all space three and four times over with aethers of different kinds, the properties of which were invented merely to 'save appearances,' so that more rational enquirers were willing rather to accept not only Newton's definite law of attraction at a distance, but even the dogma of Cotes, that action at a distance is one of the primary properties of matter, and that no explanation can be more intelligible than this fact. Hence the undulatory theory of light has met with much opposition, directed not against its failure to explain the phenomena, but against its assumption of the existence of a medium in which light is propagated.
> We have seen that the mathematical expressions for electrodynamic action led, in the mind of Gauss, to the conviction that a theory of the propagation of electric action would be found to be the very key-stone of electrodynamics. Now we are unable to conceive of propagation in time, except either as the flight of a material substance through space, or as the propagation of a condition of motion or stress in a medium already existing in space.
> Hence all these theories lead to the conception of a medium in which the propagation takes place, and if we admit this medium as a hypothesis, I think it ought to occupy a prominent place in our investigations, and that we ought to endeavour to construct a mental representation of all the details of its actions, and this has been my constant aim in this treatise.
So unless you have a good reason to do something else, and the budget to pay experienced people to bash their heads against it, you should stick to an implementation that has had this effort expended on it.
If you want an intro about common problems in custom cryptosystems, go look at cryptopals or something, but don't get too cocky that you know everything.
Even in each of those, there are two "levels" of implementation: specifying an exact algorithm that implements a solution to problem x, and actually producing the code that implements the algorithm.
At some level, there is no ready-made solution to every problem. Even if the foundations are implemented by "somebody else", the line's blurry. At which level of (lack of) expertise and which level of "lowness" of the implementation should I start to worry?
Oh, by all means, roll your own crypto, break it, and roll it again. Just do not use it.
Also, break other people's crypto and study theory.
By the way, the advice is not "unless you are qualified". Nobody is qualified to just roll their own. Good crypto is a community project and cannot happen without reviewers.
More generally, security is like any other field. You have to evaluate arguments based on the logic and evidence given. The main difference is that with crypto, it is much easier to shoot yourself in the foot and have catastrophic failure, since you have to be perfect and the attackers just have to be right once to totally own you. Thus the industry has standardized on a few solutions that have been checked really really well.
More generally, if you are interested, i would say read the actual papers. The papers on bcrypt, argon2 etc explain what problems they are trying to solve, usually by contrasting with previous solutions that have failed in some fashion. That doesn't mean reading the paper will explain everything or make you an expert or qualify you to roll your own crypto. Nor should you believe just because a paper author says something is a good idea that it actually is. It will however explain why slow hash function like bcrypt/argon2/scrypt were created and are better choices than the previous solutions in the domain like md5.
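To illustrate the "slow hash" idea with something already in the standard library: scrypt (a sibling of bcrypt/argon2) makes each password guess expensive in both CPU and memory. The parameter values below are illustrative, not a tuned recommendation:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Memory-hard password hashing via stdlib scrypt. Unlike md5,
    each guess costs the attacker real CPU time and ~16 MiB of RAM
    (with n=2**14, r=8), which is the whole point of this family
    of designs."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1)
    return salt, digest

def verify(password, salt, digest):
    """Recompute the hash and compare in constant time to avoid
    leaking information through a timing side channel."""
    candidate = hashlib.scrypt(password.encode(), salt=salt,
                               n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
```

Note this is still "using, not rolling": the primitive, the salt, and the constant-time comparison all come from vetted library code; the only choices left to the application are the cost parameters.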
It's true, but you need to realize that you're qualified enough only when you understand that you shouldn't roll your own crypto.
In my opinion, the only person who has credibly demonstrated being able to roll his own crypto is djb (http://cr.yp.to/)
> but isn’t all security obscuring something,
Keeping a secret isn't "obscuring" something, it's hiding it entirely. Security through obscurity is bad because it relies on attackers being dumb. The smartest person in the world cannot be expected to guess a well chosen and kept secret.
Edit: I should add that even if you are an expert in cryptanalysis, you still shouldn’t just roll your own crypto. It’s the analysis of the entire community, not the credentials of the author, that makes modern cryptography so strong.
It's not circular, it's a simple flowchart.
Are you writing an app or are you trying to invent more advanced crypto?
"writing an app" -> dont roll your own crypto
"invent more advanced crypto" -> go learn and research crypto history, math, etc..
That's from 10 years ago, so you might be able to find video of a more recent version; try to find a year when Wagner taught, he's great.
By attacking crypto--a lot. And submitting your crypto to be attacked by others--a lot. It's the only way to develop the requisite level of humility to design good crypto.
If any of this is true, are there any sources aside from "my friend's friend's brother took too much and now he is....", and what is the scientific explanation and do we know enough about the mind at all?
I feel like LSD has a lot of contradictory information out there, and the proponents feel the need to hand-wave concerns away because it is 'completely harmless and leaves your system in 10 hours'. But when nobody knows what they're actually getting because it doesn't exist in a legal framework, then it muddies the whole experience.
People say that, past a certain threshold, higher doses can't have more effect than lower doses. Yet it seems like the same people say "omg man 1000ug you are going to fry your brain!"
What is the truth? If it "just" had an FDA warning like "people with a family history of schizophrenia should not take it", that would be wildly better than what we have today.
Please no explanation about shrooms. Just LSD the 'research chems' distributed as LSD.
Tangential, and not an answer to your question, but if you're like me, you will be fascinated to learn that there is a drug (MPPP, a synthetic opioid) that if cooked incorrectly yields MPTP [1], which will give you Parkinson's. As in, forever. You take this drug (at any age) and then you have Parkinson's for the rest of your life.
[1] https://en.wikipedia.org/wiki/MPTP
On one hand you have anti-drug people, usually backed by the authorities. Listen to them and all drugs will make your body rot, give you hallucinations like datura, and for some reason cause complete addiction after a single dose.
Drug users, on the other hand, will tell you that it is not as bad as alcohol/tobacco/coffee/..., that concerns are unfounded, that the police are the only risk, etc...
The truth is almost impossible to find. Even peer reviewed research is lacking. I guess there are several reasons for that. Availability of controlled substances. Ethical concerns regarding experimentation. Issues with neutrality.
Now from what I gathered about LSD (and psychedelics in general): these are very random. If you take a reasonable dose, you are most likely going to have a nice, fun trip and nothing more. But it can also fuck you up for years, or maybe bring significant improvement in your life. High doses increase the chance of extreme effects and nasty bad trips, but it shouldn't kill you unless you are dealing with industrial quantities. The substance itself is not addictive, but the social context may be. The big problem is that there is no way to tell how it will go for you. There are ways to improve your chances, but it will always be random.
As for fake LSD, there are cheap reagent tests for that. They are not 100% reliable but that's better than nothing. You can also send your sample anonymously to a lab that will do a much more accurate GC/MS analysis for you.
Sure, some ("plenty", in absolute numbers) will tell you this, but I don't recall being in many forums where that attitude doesn't get significant pushback (as opposed to the anti-drug community). The modern "pro drug" community has a fairly significant culture of safety within it, unlike back in the sixties.
> The truth is almost impossible to find.
There is plentiful anecdotal evidence online. Any clinical evidence, if they ever get around to producing it in significant volume, will be utterly minuscule (and I highly doubt more trustworthy, considering what you're working with and the size of the trials that will be done) compared to the massive volume of trip reports and Q&A available online, much from people who know very well what they're talking about, not unlike enthusiasts in any domain.
> Now from what I gathered about LSD (and psychedelics in general): these are very random.
Depends on one's definition of random.
> If you take a reasonable dose, you are most likely going to have a nice, fun trip and nothing more.
Effects vary by dose of course, but I've seen little anecdotal evidence that suggest high doses have a different outcome, and plenty that suggests the opposite.
> But it can also fuck you up for years, or maybe bring significant improvement in your life.
See: https://rationalwiki.org/wiki/Balance_fallacy
> The big problem is that there is no way to tell how it will go for you. There are ways to improve your chances, but it will always be random.
I believe this to be true, but don't forget the fallacy noted above.
That said, these things are not toys - extreme caution is warranted.
I had a fling with psychedelics in my teens, and everything was great until the one time it wasn't. I was taking psychedelics pretty much every weekend, and by my count have tried over a dozen of them.
Had an experience with LSD which completely shook me to my core and gave me such severe PTSD and trauma that every night I started to have massive panic attacks and needed medical help. My entire worldview and perception of reality was shattered, I wasn't able to "anchor" myself anymore and it all felt like a sham. I was completely dissociated. I also got HPPD: to this day, everything has a sharpened oil-painting type texture to it that increases based on my anxiety level, and I'm sensitive to visual + aural stimuli (loud, brightly-colored places are unpleasant). If I get too anxious, I start to dissociate.
It took ~2 years for the PTSD to mostly subside, but even now, if I'm under a lot of stress, I'm liable to have a panic attack and get flashbacks, and need to go find somewhere quiet to sit alone and try to work through it.
LSD being the particular substance has nothing to do with it, in my opinion. I was young, dumb, reckless, and played with fire then got burned. It could have happened with any of the other dozen psychedelics I took, but it just so happened to be LSD the one time that it did.
But I want to add that while the same substance gave me the most nightmarish, traumatizing experience of my life, it also gave me the best, most positively profound one. I grew up in a pretty abusive household, didn't do well forming relationships growing up, and had a lot of anger and resentment in my worldview. After taking psychedelics (LSD, 2C-B, shrooms) and MDMA with the right group of people a few times, my entire perspective shifted. For the first time in my life, it felt like I understood what it was to be loved, what "love" was, and how we're "all in this together", so we may as well be good to each other while we're here.
It's been a long time since I've touched any of that stuff and I'm not sure I ever will again, but I don't think it's inherently bad or good. Psychedelics are like knives: neutral - they can be used as a tool, or they can cut the hell out of you if you're reckless.
---
Footnote: For context, this was probably due to life circumstances/psyche at the time. I was in a relationship with a pretty toxic partner, and my mental state wasn't the greatest. In hindsight, it seems like I was almost begging for a "slap in the face" if you will.
I think it is a bit too reductive to say they're neutral, just yet, but I am willing to say they could be used responsibly if the right information actually existed - though, like with any science, I am open to changing that view if the conclusions turn out to be different. Again, let's just stick with acid rather than all psychedelics.
> Permanent schizophrenic zombie, maybe a bit extreme, but severe and traumatic long-lasting psychological damage is a not-uncommon phenomena.
https://english.stackexchange.com/questions/6124/does-not-un...
https://towardsdatascience.com/an-introduction-to-multivaria...
HOW PSYCHEDELICS REVEALS HOW LITTLE WE KNOW ABOUT ANYTHING - Jordan Peterson | London Real --> https://www.youtube.com/watch?v=UaY0H9DBokA
Jordan Peterson - The Mystery of DMT and Psilocybin --> https://www.youtube.com/watch?v=Gol5sPM073k
> LSD being the particular substance has nothing to do with it, in my opinion. I was young, dumb, reckless, and played with fire then got burned. It could have happened with any of the other dozen psychedelics I took, but it just so happened to be LSD the one time that it did.
https://en.wikipedia.org/wiki/Hallucinogen_persisting_percep...
I have a close friend who had the same experience with excessive use of marijuana, but my money would be on psychedelics being far more likely to produce the outcome you unfortunately experienced. He's much better today, but not entirely "ok".
> But I want to add, that while giving me the most nightmarish, traumatizing experience of my life, the best/most positively-profound experience has also been on the same substance. I grew up in a pretty abusive household and didn't do well forming relationships growing up, and had a lot of anger and resentment in my worldview. After taking psychedelics (LSD, 2C-B, Shrooms) and MDMA with the right group of people a few times, my entire perspective shifted. For the first time in my life, it felt like I understand how it felt to be loved, and what "love" was, and how we're "all in this together" so we may as well be good to each other while we're here.
This sounds rather similar to my friend's story.
Can Taking Ecstasy (MDMA) Once Damage Your Memory?
https://www.sciencedaily.com/releases/2008/10/081009072714.h...
According to Professor Laws from the University's School of Psychology, taking the drug just once can damage memory. In a talk entitled "Can taking ecstasy once damage your memory?", he will reveal that ecstasy users show significantly impaired memory when compared to non-ecstasy users, and that the amount of ecstasy consumed is largely irrelevant. Indeed, taking the drug even just once may cause significant short- and long-term memory loss. Professor Laws' findings are based on the largest analysis of memory data to date, derived from 26 studies of 600 ecstasy users.
> (from your comment below) I took 300ug of LSD recklessly on a particularly bad day for me, in a particularly uncomfortable setting.
https://www.trippingly.net/lsd/2018/5/3/phases-of-an-lsd-tri...
Lots of details, plus a dosage guide (25 ug and up) & typical experiences
https://www.reddit.com/r/LSD/comments/34acza/do_you_guys_bel...
imo 300ug is the point where you need some serious experience with tripping to be able to handle yourself. because once you're coming up, the acid is already circulating in your bloodstream, and you get that horrible sinking sensation of thinking you've taken too much... you're in for a really bad time if you don't know how to control the trip.
I think it's difficult to say how big a dose really is until you've had a bad trip on it. only then can you see how insidious everything can get and as such just how intense 300ug can be. the reason people say not to start on doses like that is so they will AVOID those horrible experiences. so yeah, 300ug is a large dose, just because if shit goes wrong on it then you're fucked.
My good friend took "something" once (hard to tell what the dealer is selling you) and ended up in a mental institution, and is now in fact officially mentally disabled and on drugs for life. The drugs keep him stable enough that he's able to work, although he's still just a shadow of his former self.
Trust (knowing the chemist directly, indirectly, ...) in specific individuals > a largely unknown (but known to be imperfect) system, for many people anyways. Obviously this isn't practical for the not well connected, but it's all we got for now.
But as for your question, I've seen little to suggest it's anything more than war on drugs propaganda and hearsay.
https://en.m.wikipedia.org/wiki/Reefer_Madness
https://en.m.wikipedia.org/wiki/Chinese_whispers
even in this very thread there is someone who has been in the mental hospitals and seen problems "with their own two eyes", but is unwilling to name names, as part of a code to remove any social/legal/professional consequences for themselves or the "crazy" people there
Deleted Comment
"lasting permanent changes, obviously"
"I’ve personally seen several people experience total amnesia after tripping on high doses." No further information.
"Not lasting permanent changes"
what.
Names, sources, medical records, news reports, court cases, there has got to be something out there!
How do I know this?
I have mental illness in my family and have spent considerable amounts of time at those facilities.
What? How is it clear? As you wrote yourself, correlation is not causation.
So many questions.
Where is the medical journal that supports your conclusion that the "vast majority of mental health patients have a self-reported history of drug use of these specific drugs"? I guess it can't exist, because it's crazy people self-reporting a variety of substances when even the users would have no idea what was actually given to them.
Deleted Comment