Readit News
dottrap commented on WinObjC – The Windows Bridge for iOS   github.com/microsoft/WinO... · Posted by u/zerr
sillywalk · 2 months ago
Interesting, I never knew about this. I do remember that NeXT ported OPENSTEP to Windows NT as OPENSTEP Enterprise.
dottrap · 2 months ago
Microsoft's timing was the worst. I think they announced this a year after Swift was announced, but before Swift was open sourced. So Microsoft wouldn't be able to deal with things like Obj-C/Swift interop, which iOS developers were already jumping on. And Microsoft's Windows 8 mobile initiative was pretty clearly a flop by that point.

Frankly, this Obj-C effort needed to be done way earlier, starting with AppKit, like back when Microsoft was panicking that OS X 10.4 Tiger was going to kick Longhorn's butt. If these tools had already been proven useful before the dawn of the iPhone, Microsoft might have had a chance of riding the iOS wave.

dottrap commented on Lua 5.5.0 (Beta) Released   lua.org/work/#5.5.0... · Posted by u/dottrap
dottrap · 6 months ago
Main changes (since 5.4)

  declarations for global variables
  for-loop variables are read only
  floats are printed in decimal with enough digits to be read back correctly.
  more levels for constructors
  table.create
  utf8.offset returns also final position of character
  external strings (that use memory not managed by Lua)
  new functions luaL_openselectedlibs and luaL_makeseed
  major collections done incrementally
  more compact arrays (large arrays use about 60% less memory)
  lua.c loads 'readline' dynamically
  static (fixed) binaries (when loading a binary chunk in memory, Lua can reuse its original memory in some of the internal structures)
  dump and undump reuse all strings
  auxiliary buffer reuses buffer when it creates final string
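A hedged sketch of what a few of these look like in practice (syntax per the 5.5 work-in-progress manual; 5.5 is still in beta, so details may shift before release):

```lua
-- Lua 5.5 (beta) sketch; illustrative only, syntax may still change.

global x  -- 5.5 adds explicit declarations for global variables
x = 10

-- table.create preallocates array storage up front
local t = table.create(1000)
for i = 1, 1000 do
    t[i] = i * i
    -- i = i - 1  -- now a compile error: for-loop variables are read-only
end

-- utf8.offset now also returns the final byte position of the character
local first, last = utf8.offset("héllo", 2)
```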

dottrap commented on Teal – A statically-typed dialect of Lua   teal-language.org/... · Posted by u/generichuman
90s_dev · 7 months ago
Wait, Pallene just compiles to C using whatever local C compiler?

https://github.com/pallene-lang/pallene/blob/master/src/pall...

Well that's kinda disappointing. I expected something more in 2025, like directly generating asm like a lot of languages are starting to do.

And your article makes it ambiguous whether it's from the Lua authors or grad students. I assume it started out just the students and then the Lua authors joined in?

dottrap · 7 months ago
One of Lua's goals has been extreme portability, and the main implementation works on anything that has a C compiler, going to the extreme of compiling cleanly on C89, C99, and even compiling as C++ (no extern "C"). Remember that Lua is popular in the embedded space too, so this is a big feature.

Pallene isn't designed to be a new native language on its own. Pallene is designed to be a companion language for Lua, specializing in a performance-oriented subset of it.

But as importantly, Pallene isn't just compiling to C. Pallene is generating C code that directly manipulates the underlying Lua internals, which are in C.

The research thesis is that many bottlenecks are due to boxing and unboxing when going through an FFI. Memory safety also incurs overhead. Python is an extreme example of how excruciatingly slow this can be, but even Lua pays for it. A core tenet of the Pallene compiler is that it can generate C code that gets to cheat like crazy. Pallene gets direct access to Lua's internals, so things like arrays manipulate the underlying C arrays directly, which sidesteps boxing/unboxing. The compiler does the analysis to make sure it doesn't cheat in a way that is unsafe. Finally, the C optimizer also gets a chance to perform its own optimizations. So operations such as crunching math on arrays of numbers can get much faster, because the generated code is more CPU friendly and benefits from prefetching and cache locality.

Pallene is built with the same extreme compatibility goals as Lua, since it is designed to work with it. It only depends on a C compiler and Lua itself. If you can get Lua compiled, you can get Pallene working. That means any existing project that uses Lua (5.4) could start adding Pallene modules for new features or to try to improve performance in key areas. Since Pallene just outputs Lua modules, they look like any other Lua module implemented in C, so it won't create new portability constraints you didn't have before. This is different from, say, LuaJIT, where not all platforms allow JIT, or you may be targeting a new CPU architecture that LuaJIT does not support.

Both Teal and Pallene were started by grad students of Roberto's. Since Roberto has started giving talks on Pallene himself, I'm assuming the Lua authors are joining in.

dottrap commented on Teal – A statically-typed dialect of Lua   teal-language.org/... · Posted by u/generichuman
pansa2 · 7 months ago
> I really wish the Lua authors would add official types to Lua.

Never going to happen IMO. Adding static types would change the nature of the language completely, even more than it has in Python.

As Teal shows, it would require giving up one of Lua's core features: tables as the language's single data structure. It would significantly complicate a language known for its simplicity.

Even the implementation would need to change radically - adding a type checker would invalidate the current approach of using a single-pass source-to-bytecode compiler.

dottrap · 7 months ago
>> I really wish the Lua authors would add official types to Lua.

> Never going to happen IMO. Adding static types would change the nature of the language completely, even more than it has in Python.

You both are kind of right.

The Lua authors have been working on the new companion language to Lua named Pallene. Pallene is a subset of Lua that adds types, not for the sake of types themselves, but for the purpose of performance. The Pallene compiler can generate optimized native code that potentially removes the need to manually write a module for Lua in C.

The other cool trick is that Pallene and Lua are completely interoperable with each other, so Pallene can be added to existing Lua projects, and you can opt to use regular Lua for the dynamic parts of your code where compilers won't be able to optimize much and strong types might be more trouble than help.
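To make the interop concrete, here is a hedged sketch of a small Pallene module (syntax roughly following the examples in the Pallene repository; the file and function names are invented, and details may differ across Pallene versions):

```lua
-- sum.pln: compiled by pallenec into an ordinary C-based Lua module
local m = {}

function m.sum(xs: {float}): float
    local total: float = 0.0
    for i = 1, #xs do
        total = total + xs[i]
    end
    return total
end

return m
```

From plain Lua it would then be just `local sum = require "sum"` followed by `sum.sum({1.0, 2.0, 3.0})`, the same as loading any other C module.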

Here is a talk Roberto Ierusalimschy gave about Pallene. https://www.youtube.com/watch?v=pGF2UFG7n6Y

dottrap commented on SDL3 new GPU API merged   github.com/libsdl-org/SDL... · Posted by u/caspar
jandrese · a year ago
I'm using SDL2 2.30.0. The main loop is pretty simple, it does a few SDL_RenderFillRects to create areas, then several SDL_RenderCopy where the source is a SDL_Texture created from a SDL_Surface using SDL_CreateTextureFromSurface that was loaded from files at boot. A final call to SDL_RenderPresent finishes it off. They do include an alpha channel however.

I was expecting the sprite blitting to be trivial, but it is surprisingly slow. The sprites are quite small, only a few hundred pixels total. I have a theory that it is copying the pixels over the X11 channel each time instead of loading the sprite sheets onto the server once and copying regions using XCopyArea to tell the server to do its own blitting.

dottrap · a year ago
This should be plenty fast. SDL_RenderCopy generally does things the 'right' way on any video card made in roughly the last 15 years (basically binding a texture in GPU RAM to a quad).

You probably need to do some debugging/profiling to find where your problem is. Make sure you aren't creating SDL_Textures (or loading SDL_Surfaces) inside your main game loop. You also may want to check which backend the SDL_Renderer is using (e.g. OpenGL, Direct3D, Vulkan, Metal, software). If you are on software, that is likely your problem. Try forcing it to something hardware accelerated.
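Checking the backend can be sketched roughly like this in SDL2 (assuming SDL 2.x headers; a probe program, not a fix in itself):

```c
#include <stdio.h>
#include <SDL.h>

/* Probe which backend SDL_Renderer picked (SDL2; illustrative sketch). */
int main(void) {
    if (SDL_Init(SDL_INIT_VIDEO) != 0) {
        fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
        return 1;
    }

    /* To force a specific backend, set this hint BEFORE creating the
     * renderer, e.g.:
     * SDL_SetHint(SDL_HINT_RENDER_DRIVER, "opengl"); */

    SDL_Window *win = SDL_CreateWindow("probe", SDL_WINDOWPOS_UNDEFINED,
                                       SDL_WINDOWPOS_UNDEFINED, 64, 64, 0);
    SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);

    SDL_RendererInfo info;
    if (ren && SDL_GetRendererInfo(ren, &info) == 0) {
        /* Prints e.g. "opengl", "direct3d", "metal", or "software". */
        printf("renderer backend: %s\n", info.name);
    }

    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```

If this prints "software", forcing an accelerated driver via the hint above is the first thing to try.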

Also, I vaguely recall there was a legacy flag on SDL_Surfaces called "hardware" or "SDL_HWSURFACE" or "SDL_HWACCEL" or something. Don't set that. It was for a very legacy hardware path from like 25 years ago that is slow on everything now.

dottrap commented on SDL3 new GPU API merged   github.com/libsdl-org/SDL... · Posted by u/caspar
HexDecOctBin · a year ago
I think you are getting confused between SDL_Render and SDL_GPU. SDL_Render is the old accelerated API that was only suitable for 2D games (or very primitive looking 3D ones). SDL_GPU is a fully-featured wrapper around modern 3D APIs (well, the rasteriser and compute parts anyway, no raytracing or mesh shaders there yet).
dottrap · a year ago
I was referencing the historical motivations that led to where we are today. Yes, I was referring in part to the SDL_Render family of APIs. These were insufficient to support things like Nuklear and Dear ImGui, even though those are reasonable use cases for the simple 2D games that SDL_Render was introduced in SDL 2.0 to serve in the first place.

https://www.patreon.com/posts/58563886

Short excerpt:

    One day, a valid argument was made that basic 2D triangles are pretty powerful in themselves for not much more code, and it notably makes wiring the excellent Dear Imgui library to an SDL app nice and clean. Even here I was ready to push back but the always-amazing Sylvain Becker showed up not just with a full implementation but also with the software rendering additions and I could fight no longer. In it went.
    The next logical thing people were already clamoring for back then was shader support. Basically, if you can provide both batching (i.e. triangles) and shaders, you can cover a surprising amount of use cases, including many beyond 2D.

So fast forwarding to today, you're right. Glancing at the commit, the GPU API has 80 functions. It is full-featured beyond its original 2D roots. I haven't followed the development enough to know where they are drawing the lines now, like would raytracing and mesh shaders be on their roadmap, or would those be a bridge too far.

dottrap commented on SDL3 new GPU API merged   github.com/libsdl-org/SDL... · Posted by u/caspar
bni · a year ago
Is this related to https://github.com/grimfang4/sdl-gpu ? Or is it a completely separate thing with the same name?
dottrap · a year ago
This is a separate thing with the same name, although both share some common ideas. grimfang4/sdl-gpu is a separate library used alongside SDL, while the new SDL GPU API is directly part of SDL. grimfang4/sdl-gpu is much older and works with today's SDL 2.

grimfang4/sdl-gpu was one good way to take advantage of modern GPUs in a simple way and work around the holes/limitations of the old SDL 2D API. The new SDL 3 GPU API will likely make things like grimfang4/sdl-gpu redundant.

dottrap commented on SDL3 new GPU API merged   github.com/libsdl-org/SDL... · Posted by u/caspar
shmerl · a year ago
Why is SDL API needed vs gfx-rs / wgpu though? I.e. was there a need to make yet another one?
dottrap · a year ago
The old SDL 2D API was not powerful enough. It was conceived in the rectangle-sprite-blitting days, when video hardware was designed very differently and had drastically different performance characteristics. If you wanted anything more, OpenGL used to be 'the best practice'. But today the landscape is split between Vulkan, Metal, and Direct3D, and hardware is centered around batching and shaders. Targeting OpenGL is harder now because it fragmented into GL vs. GLES, and platform support varies (e.g. Apple stopped updating GL after 4.1).

A good example demonstrating where the old SDL 2D API is too limited is with the 2D immediate mode GUI library, Nuklear. It has a few simple API stubs to fill in so it can be adapted to work with any graphics system. But for performance, it wants to batch submit all the vertices (triangle strip). But SDL's old API didn't support anything like that.

The reluctance was that the SDL maintainers didn't want to create a monster and couldn't decide where to draw the line, so the line was held at the old 2D API. Then a few years ago, a user changed the maintainers' minds with a demonstration of how much could be achieved just by adding a simple batching API to SDL 2D. That shifted the mindset and led to the current effort. I have not closely followed the development, but I think it still aims to be a simple API, and you will still be encouraged to pick a full-blown 3D API if you go beyond 2D needs. But you should no longer need to leave SDL to do 2D things in modern ways on modern hardware.
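For context, the batching primitive that eventually landed in SDL 2.0.18 is SDL_RenderGeometry: one call submits a whole triangle list, which is exactly the shape of data that libraries like Nuklear and Dear ImGui produce. A minimal sketch (assuming SDL 2.0.18+ headers; illustrative, not from the linked post):

```c
#include <SDL.h>

/* Sketch: draw one batched triangle via SDL_RenderGeometry (SDL >= 2.0.18). */
int main(void) {
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *win = SDL_CreateWindow("batch", SDL_WINDOWPOS_UNDEFINED,
                                       SDL_WINDOWPOS_UNDEFINED, 320, 240, 0);
    SDL_Renderer *ren = SDL_CreateRenderer(win, -1, 0);

    /* Each SDL_Vertex carries position, color, and texture coordinates;
     * a GUI library would hand over thousands of these in one batch. */
    SDL_Vertex verts[3] = {
        { {160.0f,  40.0f}, {255, 0, 0, 255}, {0.0f, 0.0f} },
        { { 80.0f, 200.0f}, {0, 255, 0, 255}, {0.0f, 0.0f} },
        { {240.0f, 200.0f}, {0, 0, 255, 255}, {0.0f, 0.0f} },
    };

    SDL_RenderClear(ren);
    SDL_RenderGeometry(ren, NULL, verts, 3, NULL, 0); /* batched submit */
    SDL_RenderPresent(ren);

    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```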
