adrian_b commented on Mathematical secrets of ancient tablet unlocked after nearly a century of study (2017)   theguardian.com/science/2... · Posted by u/surement
drdec · 20 hours ago
If you do not accept that space is infinitely divisible, then the diagonal of a unit square does not actually exist in the space.
adrian_b · 4 hours ago
A long time has passed since the paradoxes of Zeno of Elea, so there is no longer any reason not to accept that space is infinitely divisible.

The error of Zeno of Elea was that he did not understand the symmetry between zero and infinity (or he pretended not to understand it).

Because of this error, Zeno considered infinity stronger than zero, so he believed, or pretended to believe, that zero times infinity must be infinity, instead of recognizing that zero times infinity is indeterminate: it can be any number, as well as zero or infinity. For instance, as x goes to 0, x*(c/x) stays equal to c for any constant c, while x*(1/x^2) grows without bound.

For now, there exists no evidence whatsoever that physical space and time are not infinitely divisible.

Even if it were discovered in the future that space and time have a discrete structure, the mathematical model of infinitely divisible space and time would remain useful as an approximation, because it is certainly simpler than whatever mathematical model would be needed for discrete space and time.

adrian_b commented on Mathematical secrets of ancient tablet unlocked after nearly a century of study (2017)   theguardian.com/science/2... · Posted by u/surement
numpy-thagoras · a day ago
Alright, I'll bite:

To defend Wildberger a bit (because I am an ultrafinitist) I'd like to state first that Wildberger has poor personal PR ability.

Now, as programmers here, you are all natural ultrafinitists as you work with finite quantities (computer systems) and use numerical methods to accurately approximate real numbers.

An ultrafinitist says that that's really all there is to it. The extra axiomatic fluff about infinities existing is logically unnecessary for doing all the heavy lifting of the math that we are familiar with. Wildberger's point (and the point of all ultrafinitist claims) is that it's an intellectual and pedagogical disservice to teach and speak of, e.g., real numbers as if they actually involve infinite quantities that you can never fully specify. We are always going to have to confront the numerical-methods part, so it's better to make teaching about numbers methodologically aligned with how we actually measure and use them.

I have personally been working on building various finite equivalents to familiar math. I recommend Radically Elementary Probability Theory by Nelson to anyone who wants a better sense of how to do finite math, at least at the theoretical level. Once again, at the practical level of directly computing quantities, we've only ever done finite math.

adrian_b · 5 hours ago
I wonder what ultrafinitists do about topology.

Topology, i.e. the analysis of connectivity, is built upon the notions of continuity and infinite divisibility, which seem difficult to handle in an ultrafinitist way.

Topology is an exceedingly important branch of mathematics, not only theoretically (I consider some of the results of topology very beautiful) but also practically, because a great part of engineering design work consists of solving problems where only the topology matters, not the geometry, e.g. in electronic schematic design.

So I would consider any framework for mathematics that does not handle topology well to be incomplete and unusable.

Ultrafinitist theories may be interesting to study as an alternative, but the reality is that infinitesimal calculus in its modern rigorous form does not need any alternatives: it works well enough, and so far I have seen no alternatives that are simpler, only alternatives that are more complicated, without benefits sufficient to justify the extra complexity.

I also wonder what ultrafinitists do about projective geometry and inversive geometry.

I consider projective geometry one of the most beautiful parts of mathematics. When I first encountered it, at a very young age, it was quite a revelation, due to the unification it allows for various concepts that are distinct in classical geometry. Projective geometry is based on completing the affine spaces with various kinds of subspaces located at an "infinite" distance.
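
To make the completion concrete, consider the standard construction of the real projective plane: a point is a triple (x : y : w) defined up to a nonzero scale factor; points with w != 0 correspond to the affine points (x/w, y/w), while points with w = 0 form the line at infinity, where each family of parallel lines meets in a single point.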

Without handling infinities, and without visualizing what the various curves located at infinity look like (as parts of surfaces that can be seen at finite distances), projective geometry would become very hard to understand, even if one duplicated its algorithms while avoiding the names related to "infinity".

Similarly for inversive geometry, where the affine spaces are completed with points located at "infinity".

Such geometries are beautiful and very useful, so I would not consider usable a variant of mathematics that does not include them.

adrian_b commented on Nvidia's new 'robot brain' goes on sale for $3,499   cnbc.com/2025/08/25/nvidi... · Posted by u/tiahura
CamperBob2 · a day ago
128 GB for $3,499 doesn't sound bad at all.
adrian_b · 9 hours ago
You can get the same memory (including approximately the same bandwidth) in a Strix Halo system at half this price.

Therefore it sounds quite bad. Like Orin before it, Thor is severely overpriced.

It is worthwhile only for those who absolutely need some of its features that are not available elsewhere, like automotive certification or the ability to interface with a great number of video cameras without additional boards.

adrian_b commented on Nvidia's new 'robot brain' goes on sale for $3,499   cnbc.com/2025/08/25/nvidi... · Posted by u/tiahura
jsight · a day ago
This sounds very similar to the dgx spark, which still hasn't shipped afaik.
adrian_b · 9 hours ago
The CPU cores are very different (big Cortex-X925 + small Cortex-A725 in DGX Spark vs. medium-size Cortex-X4, i.e. Neoverse V3AE, in Thor).

The CPU of DGX Spark has better single-threaded performance, while that of Thor has better multi-threaded performance per die area and per power consumption.

adrian_b commented on Nvidia's new 'robot brain' goes on sale for $3,499   cnbc.com/2025/08/25/nvidi... · Posted by u/tiahura
beefnugs · 12 hours ago
It's not even close dude, the Nvidia stuff is like 2000 TOPS vs the 50 you get from the AI 395+
adrian_b · 10 hours ago
True, but this advantage holds strictly for AI inference, and only when using the very low-precision FP4 format with sparse matrices.

When using bigger number formats and/or dense matrices, the advantage of Thor diminishes considerably.

Also, the 50 TOPS figure is only for the low-power NPU. When distributing the computation over the GPU and CPU as well, you get much more. So for a balanced comparison one has to divide the Thor value by 4 or more and multiply the Ryzen value by a factor that might be around 3 or even more (with those factors, roughly 2000 / 4 = 500 TOPS vs. 50 x 3 = 150 TOPS, i.e. a gap closer to 3x than to 40x).

The Ryzen CPU is significantly better, and its GPU has about the same size but a much higher clock frequency, so it should also be faster. Therefore, for anything except AI inference, a Ryzen Max at half the price will offer much more bang for the buck.

adrian_b commented on Nvidia's new 'robot brain' goes on sale for $3,499   cnbc.com/2025/08/25/nvidi... · Posted by u/tiahura
jauntywundrkind · a day ago
Wow: notably a more advanced CPU than DGX GB200! 14 Neoverse V3AE cores, whereas Grace Hopper is 72x Neoverse V2. Comparing versus big GB100: 2560/96 CUDA/Tensor cores here vs big Blackwell's 18432/576 cores.

> Compared to NVIDIA Jetson AGX Orin, it provides up to 7.5x higher AI compute and 3.5x better energy efficiency.

I could really use a table of all the various options Nvidia has! Jetson AGX Orin (2023) seems to start at ~$1700 for a 32GB system, with 204GB/s bandwidth, 1792 Ampere cores, 56 Tensor cores, & 8 A78AE ARM cores, 200 TOPS "AI performance", 15-45W. A slightly bigger model with 2048/64/12 cores and 275 TOPS, at 15-60W, is available. https://en.wikipedia.org/wiki/Nvidia_Jetson#Performance

Now Jetson T5000 is 2070 TFLOPS (but FP4, sparse! Still ~double-ish). 2560-core Blackwell GPU, 96 Tensor cores, 14 Neoverse V3AE cores. 273GB/s, 128GB. 4x 25GbE is a neat new addition. 40-130W. There's also a lower-spec T4000.

Seems like a pretty in-line leap at 2x the price!

Looks like a physically pretty big unit. Big enough that I scratched my head during the intro video of robots opening up the package and wondered: where are they going to fit their new brain? But man, the breakdown diagram: it's, unsurprisingly, half heatsink.

adrian_b · 10 hours ago
It should be noted that Neoverse V3AE and Neoverse V3 are the automotive/server versions of the Cortex-X4 core, which is well known from many smartphones (and which is similar in performance to the Skymont E-cores of the Intel Lunar Lake, Arrow Lake S and Arrow Lake H CPUs).

While the Cortex-X925, the successor of Cortex-X4, has better absolute performance, it has much worse performance per die area. Therefore, for a CPU where the best multi-threaded performance is desired, Neoverse V3AE/Neoverse V3/Cortex-X4 remains the best CPU core designed by the Arm company.

This year's Arm core announcements have been delayed and it is not clear how the future Cortex-A930 and Cortex-X930 will compare with the currently existing Cortex-X4, Cortex-A725 and Cortex-X925.

adrian_b commented on A simple way to generate random points on a sphere   johndcook.com/blog/2025/0... · Posted by u/piinbinary
nwallin · 3 days ago
Accept-reject methods are nonstarters when the architecture makes branching excessively expensive, specifically SIMD and GPU, which are among the domains where generating random points on a sphere is particularly useful.

The Box-Muller transform is slow because it requires log, sqrt, sin, and cos. Depending on your needs, you can approximate all of these.

log2 can be easily approximated using fast inverse square root tricks:

    #include <bit>  // std::bit_cast (C++20)

    // Approximate log2(x) for x > 0: reinterpret the float's bits as an
    // integer, then rescale so the exponent field reads as log2(x).
    constexpr float fast_approx_log2(float x) {
      x = static_cast<float>(std::bit_cast<int>(x));  // bits as an integer
      constexpr float a = 1.0f / (1 << 23);  // undo the 23-bit mantissa shift
      x *= a;
      x -= 127.0f;  // subtract the exponent bias
      return x;
    }
(Conveniently, this also obviates the need to ensure your input is not zero.)

sqrt is pretty fast; turn `-ffast-math` on (this is already the default on GPUs). (Remember that you're normalizing the resulting vector, so fold this into the mag_sqr before taking the square root.)

The slow part of sin/cos is precise range reduction. We don't need that: the input to sin/cos in Box-Muller is by construction in the range [0, 2pi], so range reduction is a no-op.

For my particular niche, these approximations and the resulting biases are justified. YMMV. When I last looked at it, the fast log2 gave a bunch of linearities where you wanted the result to be smooth; however, across multiple dimensions these linearities seemed to cancel out.
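
A minimal sketch of how these pieces might combine (the names are illustrative; std::sin and std::cos stand in for branch-free polynomial approximations, which need no range reduction here because the phase lies in [0, 2pi) by construction):

    #include <cmath>

    struct Vec2 { float x, y; };

    // Box-Muller: turn two uniforms u1, u2 in (0, 1] into a 2D Gaussian
    // sample; ln(u1) is computed as log2(u1) * ln(2) via fast_approx_log2.
    inline Vec2 box_muller(float u1, float u2) {
      const float ln_u1 = 0.69314718f * fast_approx_log2(u1);
      const float r = std::sqrt(-2.0f * ln_u1);  // Gaussian radius
      const float phase = 6.2831853f * u2;       // in [0, 2pi), no reduction
      return { r * std::cos(phase), r * std::sin(phase) };
    }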

adrian_b · 11 hours ago
The range reduction of sin/cos is slow only because of the stupid leftover from the 19th century that is the measuring of phases and plane angles in radians.

A much better unit for phase and plane angle is the cycle, with which range reduction becomes exact and very fast: it is just taking the fractional part of the argument.

The radian had advantages for symbolic computation done with pen and paper, by omitting a constant multiplicative factor in the differentiation and integration formulae.

However, even for numeric computations done with pen and paper the radian was inconvenient, so in the 19th century sexagesimal degrees and cycles continued to be used in parallel with radians.

Since the development of automatic computers there has remained no reason whatsoever to use the radian for anything. Radians are never used in input sensors or output actuators, because that can be done only with low accuracy; in physical inputs and outputs, angles are always measured in fractions of a cycle. Computing derivatives or integrals happens much more seldom than other trigonometric function evaluations. Moreover, the functions that are differentiated or integrated almost always have another multiplicative factor in their argument, e.g. a frequency or a wavenumber, so that factor will also appear in the differentiation/integration formula, and the extra multiplication by 2Pi that appears in the derivative (or by 1/(2Pi) in the integral) can usually be absorbed into that factor; in any case, it needs to be done only once for a great number of function evaluations. Therefore switching the phase unit of measurement from the radian to the cycle greatly reduces the amount of required computation, while also increasing the accuracy of the results.

A mathematical library should implement only the trigonometric functions of 2Pi*x, and their inverses, which suffice for all applications. There is no need for the cos and sin functions with arguments in radians, which are what all standard libraries provide now.

For reasons that are completely mysterious to me, the C standard and the IEEE standard for FP arithmetic have been updated to include the trigonometric functions of Pi*x, not the functions of 2Pi*x.

It is completely beyond my power of comprehension why this has happened. All the existing applications want to measure phases in cycles, not in half-cycles. At most there are some applications where measuring phases or plane angles in right angles could be more convenient, i.e. those might like trigonometric functions of (Pi/2)*x, but even in those cases there is no use for half-cycles.

So with the functions of the updated math libraries, e.g. cospi and sinpi, you can hopefully avoid the slow range reduction, but you still have to add a superfluous scaling by 2, due to the bad function definitions.
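
For illustration, a minimal sketch of a sine that takes its phase in cycles (sin_cycles is a hypothetical name; with a library that provides sinpi, the last line would become the superfluous-looking sinpi(2.0f * x)):

    #include <cmath>

    // Sine with its argument in cycles (1 cycle = 2Pi radians).
    // Range reduction is an exact fractional-part operation, unlike the
    // radian case, which needs a high-precision reduction modulo 2Pi.
    inline float sin_cycles(float x) {
      x -= std::floor(x);                        // exact reduction to [0, 1)
      return std::sin(6.2831853071795865f * x);  // scale by 2Pi only here
    }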

Similar considerations apply to the use of binary logarithms and exponentials instead of the slower and less accurate hyperbolic (a.k.a. natural) logarithms and exponentials.

adrian_b commented on A simple way to generate random points on a sphere   johndcook.com/blog/2025/0... · Posted by u/piinbinary
legobmw99 · 2 days ago
Does this generalize to higher dimensions? I’m realizing my mathematics education never really addressed alternative coordinate systems outside of 2/3 dimensions
adrian_b · 12 hours ago
The last method mentioned at that wolfram.com link should work for any dimension (i.e. choosing random Cartesian coordinates with a normal distribution, then normalizing the resulting vector to get a point on the sphere).

The method presented in the parent article is derived exactly from this method of Marsaglia, and it should also work for any dimension.
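
A minimal sketch of that method in any dimension (the function name and the choice of std::mt19937 are illustrative):

    #include <array>
    #include <cmath>
    #include <cstddef>
    #include <random>

    // Uniform random point on the unit sphere in N dimensions: draw each
    // Cartesian coordinate from a standard normal distribution, then
    // normalize the resulting vector to unit length.
    template <std::size_t N>
    std::array<double, N> random_point_on_sphere(std::mt19937& rng) {
      std::normal_distribution<double> gauss(0.0, 1.0);
      std::array<double, N> p;
      double norm_sq = 0.0;
      for (auto& c : p) {
        c = gauss(rng);
        norm_sq += c * c;
      }
      const double inv_norm = 1.0 / std::sqrt(norm_sq);  // nonzero w.p. 1
      for (auto& c : p) c *= inv_norm;
      return p;
    }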

adrian_b commented on Children of the Geissler Tube (2023)   hopefulmons.com/p/childre... · Posted by u/paulkrush
paulkrush · 3 days ago
OK, I get it now: "The article conflates two parallel branches with shared glass/vacuum know-how when it starts talking about diodes."
adrian_b · 12 hours ago
The article is right that vacuum tubes, whose first applications were the Edison light bulbs and then the Fleming diodes (which evolved directly from the incandescent light bulbs), are descendants of the Geissler tubes.

During the evolution of the Geissler tubes, the techniques for making efficient vacuum pumps and for sealing glass tubes well were developed.

Only when the pumps and the glass tubes had become good enough did it become possible to experiment with the first vacuum tubes, by omitting the filling of the tubes with low-pressure gas.

Without the decades of playing with Geissler tubes, there would never have been any incentive to develop the technologies without which making vacuum tubes would have been impossible.

Therefore there is no doubt that what the article says is correct, i.e. that vacuum tubes are descendants of the low-pressure gas tubes, which were initially known as Geissler tubes.

So the vacuum tubes are a lateral branch of the development of the low-pressure gas tubes. After splitting, both branches continued to evolve in parallel until both were replaced in most of their applications by semiconductor devices.

While vacuum tubes were better known to the general public, because they were present in things like radio receivers and TV sets, which many people owned, in industrial applications gas tubes have always had an importance similar to that of vacuum tubes.

Even the first electronic counter, which can be considered the ancestor of all electronic computers, was made with gas tubes, not with vacuum tubes. (The first electronic counters were made to count the pulses from detectors of nuclear or cosmic radiation, for which the existing electro-mechanical counters were too slow. The circuits developed for this application were the basis for the development, during WWII, of the digital electronic circuits used in the first electronic computers.)

adrian_b commented on Children of the Geissler Tube (2023)   hopefulmons.com/p/childre... · Posted by u/paulkrush
kevin_thibedeau · 3 days ago
Neon indicator bulbs are technically Geissler tubes.
adrian_b · 12 hours ago
Approximately.

The Geissler tubes used for lighting, including those filled with neon, use the light of the so-called "positive column" of gas, which emits light along almost the entire length of the tube, regardless of its shape; this also allows the tube to be curved, e.g. into the form of a letter.

The neon indicator bulbs use the so-called "negative glow", which is emitted from a small region around the metallic cathode, while the rest of the volume of the bulb emits no light. Because the light-emitting zone surrounds the cathode, shaping the metallic cathode, e.g. in the form of a digit, gives the same shape to the light.

When electric discharges occur in a tube with low-pressure gas, various parts of the tube may emit light, depending on the pressure of the gas, on the applied voltage, and on the dimensions and shapes of the tube and of the electrodes.

While the emitted light can also take other forms, only 2 variants are used in practical applications: the positive-column light for general lighting and the negative glow around the cathode for indicator or display applications.
