Semi-accurate. For example, PCIe remains dominant in computing. PCIe is technically a serial protocol: new versions (7.0 is releasing soon) keep increasing the serial transmission rate. However, PCIe also scales in parallel to match performance needs through "lanes", where one lane is a total of four wires arranged as two differential pairs, one pair for receiving (RX) and one for transmitting (TX).
PCIe scales up to 16 lanes, so a PCIe x16 interface has 64 wires forming 32 differential pairs. When routing PCIe traces, the lengths of all differential pairs must match to within about 100 mils of each other (I believe; it's been about 10 years since I last read the spec). That addresses the "timing skew between lanes" you mention, and DRCs in the PCB design software will ensure the trace-length matching requirement is respected.
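The check a DRC performs here is conceptually simple; a minimal sketch (the pair lengths and the 100-mil budget are illustrative values, not the actual spec numbers):

```python
# Illustrative sketch of the length-matching check a PCB DRC rule performs.
# All lengths are in mils; the skew budget is a placeholder, not the PCIe spec value.
def check_skew(pair_lengths_mils, max_skew_mils=100):
    """Return True if all differential-pair lengths fall within the skew budget."""
    spread = max(pair_lengths_mils) - min(pair_lengths_mils)
    return spread <= max_skew_mils

lanes = [5230, 5245, 5198, 5260]  # hypothetical routed lengths of four pairs
print(check_skew(lanes))  # True: spread is 62 mils
```

Real DRC engines do this per net class against the rule you configure, and flag the offending pair rather than returning a boolean, but the comparison is the same.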
>how can this be addressed in this massive parallel optical parallel interface?
From a hardware perspective, reserve a few "pixels" of the story's MicroLED transmitter array for link control rather than data transfer. Examples might be a clock or a data-frame synchronization signal. From the software side, design a communication protocol that negotiates a stable connection between the endpoints and incorporates checksums.
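To make the software side concrete, here is a minimal framing sketch: a sync word (playing the role of the reserved link-control pixels), a length field, and a CRC32 checksum. The sync value and field layout are my own illustrative choices, not anything from an actual optical-link spec:

```python
import struct
import zlib

SYNC = 0xA5A5  # hypothetical frame-sync word

def make_frame(payload: bytes) -> bytes:
    """Prepend a sync word and payload length, append a CRC32 checksum."""
    header = struct.pack(">HH", SYNC, len(payload))
    crc = struct.pack(">I", zlib.crc32(header + payload))
    return header + payload + crc

def parse_frame(frame: bytes) -> bytes:
    """Validate sync and checksum; return the payload or raise ValueError."""
    sync, length = struct.unpack(">HH", frame[:4])
    if sync != SYNC:
        raise ValueError("lost frame synchronization")
    payload, crc = frame[4:4 + length], frame[4 + length:]
    if struct.unpack(">I", crc)[0] != zlib.crc32(frame[:4 + length]):
        raise ValueError("checksum mismatch")
    return payload

print(parse_frame(make_frame(b"hello")))  # b'hello'
```

A real link-layer protocol would add retransmission and lane training on top, but checksummed framing like this is the baseline for detecting corruption on any noisy channel.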
Abstractly, the serial vs. parallel dynamic shifts as technology advances. Raising clock rates to push more data down the line faster (a serial improvement) works to a point, but eventually you hit the limits of current technology. Still need more bandwidth? Add more lines to meet your needs (a parallel improvement). Eventually the technology improves, and the cycle continues. PCIe is a perfect example of this.
The bigger question is whether magnetic confinement fusion will lead to the best energy-producing devices. Competitors include inertial confinement, pinches, and other exotic methods. If a magnetic confinement fusion device produces net power, it's going to be a stellarator [1].
Sources:
[1] https://en.wikipedia.org/wiki/Omnigeneity