lxdesk commented on End of an Era   erasmatazz.com/personal/s... · Posted by u/marcusestes
lxdesk · 2 months ago
Crawford's work is worthy of study, as are the causes of his external failures. It embodies the "simulationist" aesthetic of game design: given enough modelled parameters, something emergent and interesting will happen. This was a trend of the 20th century: computers were new and interesting, and simulations did work when you asked them to solve physics problems and plan logistics. Why wouldn't it work for narrative?

But then you play the games, and they're all so opaque. You have no idea what's going on, and the responses to your actions are hard to grasp. And if you do figure it out, the model usually collapses into a linear, repeatable strategy and the illusion of depth disappears. You can see this happening from the start, with Gossip. Instead of noticing that his game didn't communicate and looking for points of accessibility, he plunged further into computer modelling. The failure is one of verisimilitude: the model resembles ground truth on paper, but it's uninteresting to behold because it doesn't cohere into a whole. It just reflects the designer's thoughts on "this is how the world should work", which is something that can be found in any comments section.

Often, when Crawford lectured, he would reach for evo-psych theories to support his claims: that is, he was confident that the answers he had already accepted about the world and society were the correct ones, and the games were a matter of illustration. He was likewise confident that a shooting game would be less thoughtful than a turn-based strategy game because the moment-to-moment decisions were less complex, and that the goal should be completeness in the details.

I think he's aware of some of this, but he's a stubborn guy.

lxdesk commented on Implementing a Forth   ratfactor.com/forth/imple... · Posted by u/todsacerdoti
0x445442 · 3 months ago
For what kind of programs would one naturally reach for Forth as the optimal solution? It has always struck me as a very low-level language, but I rarely hear this caveat from its advocates.
lxdesk · 3 months ago
I would put it into these three categories:

1. Assembly coding within a REPL. Forth supports "load-and-store" without the additional bookkeeping steps of assembly. Once the program works, it can be incrementally rewritten into assembly if needed, or used to bootstrap something else. Historically this is probably the single biggest usage, because the language works as a blunt instrument for that within the standard wordsets. Lots of programs on the early micros shipped with code that was developed in Forth, with the Forth interpreter discarded at the last step; and where there is novel hardware and a novel application, Forth tends to come up as the bootstrap.

2. Minimal-dependencies coding. For the same reason that it's a good bootstrapping tool, Forth ends up being portable by assuming nothing. While different Forth systems are all subtly incompatible, the runtime model is small enough to wrangle into doing what you want. Stack machine VMs basically are "Forth with more sandbox and less human-readability".

3. "Big ideas" coding. The "human-readable stack machine" aspect means it's a useful substrate for language design - being programmable, you can shift the imperative interpreter model in the direction of new syntax and new general-purpose data structures, while still retaining a way to drop all the way down to assembly - the biggest downside is that this doesn't let you easily introduce existing library code, so bootstrapping from within Forth would take a long time and you would most likely get stuck on trivial string processing. But Forth as the second of a two-step process where you "compile to Forth" using something more batteries-included is actually pretty reasonable as an alternative to generating a binary or designing an original VM.

lxdesk commented on World Bank rejects El Salvador request for Bitcoin help   bbc.com/news/business-575... · Posted by u/verginer
desine · 4 years ago
They cite environmental and transparency concerns, but somehow I believe it's more about control and ensuring the continuation of the debt-based fiat systems.

I have a litmus test I use when discussing global financial politics with friends - "What were the main factors you think led to Ghaddafi's killing?"

lxdesk · 4 years ago
You don't even have to look to geopolitical analogies. It's an everyday thing, all the way down to basic "exclusive club" gatekeeping.

There's a longstanding tendency across financial systems to use the law to bar access to the "real" products for reasons that happen to favor the incumbent elite. Instead, if you get any access, it's a version mediated by a middleman of some kind. There is often a rationalization in play, but the effective control over societal outcomes is the same.

Want to found a disruptive company in 16th century Europe? You had better have a royal charter.

Maybe it's the 19th century and you have a great invention: "Patent fees for England alone amounted to £100-£120 ($585) or approximately four times per capita income in 1860." [0]

You're a laborer in 1900, and you've pooled a little nest egg you want to use to trade stocks? You can't afford the real stuff, so you will have to play in a bucket shop.

You're a middle-class Black person in the 1950s US and you want to own a home or start a business? Redlining ensures that you won't get a good deal or your neighborhood of choice, nor will you get a loan from the major banks (at least, not one on reasonable terms).

And so I have to conclude that the whole basis of the debt system is always subject to some form of gatekeeping, at some point, and that's what has drawn people back to precious metal exchange over centuries, despite its limits. We've been through a long period where debt worked really well, because our economies experienced industrial growth patterns and could coexist within a stable framework (some world wars and interventions notwithstanding). That does not mean it's better or forever.

The same kind of framework is now being enforced in cryptocurrency; cypherpunk-friendly privacy coins with some adherence to Bitcoin's original spirit, like Monero or ZCash, have been delisted from most exchanges through regulatory pressure, while defanged "blockchain economy" tokens enjoy stone's-throw availability and heavy promotion. Meanwhile a substantial number of token exchange services will play games with your ability to withdraw to keys you own.

But I think that's going to be about as hopeless an endeavor as stopping music piracy was. It's abundantly clear that we're headed towards a long-term breakdown in "trust me" debt economies and their model of operation, even if some of the leaks get plugged in the near term in the way that Spotify "solved" piracy [1]; what "trust me" now produces at Internet scale is increasingly sophisticated ransomware hacking. So, while debt and lending could still exist and be a rewarding venture, tokens lacking credible mechanisms to back their fundamental value and consensus are going to wash out.

(I also think the El Salvador plan is a stunt - a way of marketing the country with a side of personal benefit - albeit one that could become consequential in surprising, unpredictable ways, in the way Bitcoin has been generally.)

[0] https://eh.net/encyclopedia/an-economic-history-of-patent-in...

[1] https://www.digitalmusicnews.com/2018/03/22/music-piracy-spo...

lxdesk commented on Beware of Tight Feedback Loops (2020)   brianlui.dog/2020/05/10/b... · Posted by u/ZephyrBlu
danielmarkbruce · 4 years ago
I read the article and the example - I don't see it as an example of what he lays out. As you say - he laid out a bunch of questions and answered them. How is that testing anything against noisy data? Why are they intermediate steps? Where is the feedback loop? What if his answers are wrong and/or what if his sense for how likely they are is wrong?
lxdesk · 4 years ago
What he calls "world construction" involves the development of a rubric custom to the problem.

This creates a faster feedback loop inside of the larger, noisier one. Your feedback is now guided around the question of "what makes the rubric itself better?" This can be done on principle, with limited access to external information. Philosophical thinking is eminently suited to this style of problem, but it can be supplemented with short-term empirical studies that add some falsifying points and narrow your cone of uncertainty.

At the end you've generated a list of yes/no questions forming the rubric of whether the course of action is likely to succeed. It can be turned into a ranking score, or a pass-fail threshold.
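
A toy sketch of that last step - the questions, weights, and threshold here are invented purely for illustration:

  # Hypothetical rubric: yes/no questions, each answered True/False.
  rubric = {
      "Does this survive a change in my personal circumstances?": True,
      "Would it still look sensible from a rival's perspective?":  False,
      "Can a short empirical test falsify a key assumption?":      True,
      "Does it stay coherent if the timeline doubles?":            True,
  }

  score = sum(rubric.values()) / len(rubric)   # ranking score in [0, 1]
  passes = score >= 0.75                       # or a pass/fail threshold
  print(f"score={score:.2f}, pass={passes}")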

If you're frustrated by the idea of just making it up on principle, that's a frustration with philosophy itself; it rarely "works" until you accept some pragmatic premises around what is "good" or "true". The point of having a large number of questions, using a wide variety of perspectives, is that they test the overall coherency of the premise. Something can work fine from one perspective, and then completely fail in another. When that happens, it's a good sign that you have more to improve.

It's quite an important life skill to practice. It's easy to go along with the crowd, but this is a way of breaking away from it.

lxdesk commented on It's probably time to stop recommending Clean Code (2020)   qntm.org/clean?tw=... · Posted by u/avinassh
TeMPOraL · 4 years ago
Thanks for the detailed evaluation. I'll start by reiterating that the project is a typical tile-based roguelike, so some of the concerns you mention in the second paragraph don't apply. Everything runs sequentially and deterministically - though the actual order of execution may not be apparent from the code itself. I mitigate it to an extent by adding introspection features, like e.g. code that dumps PlantUML graphs showing the actual order of execution of event handlers, or their relationship with events (e.g. which handlers can send what subsequent events).

I'll also add that this is an experimental hobby project, used to explore various programming techniques and architecture ideas, so I don't care about most constraints under which commercial game studios operate.

> The perceived behavior is intimately coupled to when its processing occurs and when the effects are "felt" elsewhere in the loop - everything's tied to some kind of clock, whether it's the CPU clock, the rendered frame, turn-taking, or an abstracted timer. These kinds of bugs are a matter of bad specification, rather than bad implementation, so they resist automated testing mightily.

Since day one of the project, the core feature was to be able to run headless automated gameplay tests. That is, input and output are isolated by design. Every "game feature" (GF) I develop comes with automated tests; each such test starts up a minimal game core with fake (or null) input and output, the GF under test, and all GFs on which it depends, and then executes faked scenarios. So far, at least for minor things, it works out OK. I expect I might hit a wall when there are enough interacting GFs that I won't be able to correctly map desired scenarios to actual event execution orders. We'll see what happens when I reach that point.

> that's the kind of thing where you could theoretically use SQLite and have a very flexible runtime data model with a robust query system - but fully exploiting it wouldn't have the level of performance that's expected for a game.

Funny you should mention that.

The other big weird thing about this project is that it uses SQLite for runtime game state. That is, entities are database rows, components are database tables, and the canonical gameplay state at any given point is stored in an in-memory SQLite database. This makes saving/loading a non-issue - I just use SQLite's Backup API to dump the game state to disk, and then read it back.

Performance-wise, I tested this approach extensively up front, by timing artificial reads and writes in expected patterns, including simulating a situation in which I pull map and entities data in a given range to render them on screen. SQLite turned out to be much faster than I expected. On my machine, I could easily get 60FPS out of that with minimal optimization work - but it did consume most of the frame time. Given that I'm writing an ASCII-style, turn(ish) roguelike, I don't actually need to query all that data 60 times per second, so this is quite acceptable performance - but I wouldn't try that with a real-time game.
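
For a sense of the shape of that arrangement, a rough sketch in Python's sqlite3 - the tables and columns are invented for illustration, not the project's actual schema:

  import sqlite3

  # In-memory canonical game state: entities are rows, each component is a table.
  game = sqlite3.connect(":memory:")
  game.executescript("""
      CREATE TABLE entity   (id INTEGER PRIMARY KEY);
      CREATE TABLE position (entity_id INTEGER REFERENCES entity(id), x INT, y INT);
      CREATE TABLE health   (entity_id INTEGER REFERENCES entity(id), hp INT);
  """)

  eid = game.execute("INSERT INTO entity DEFAULT VALUES").lastrowid
  game.execute("INSERT INTO position VALUES (?, ?, ?)", (eid, 3, 7))
  game.execute("INSERT INTO health   VALUES (?, ?)", (eid, 20))

  # ECS-style query: everything with a position and health inside a map range.
  rows = game.execute("""
      SELECT p.entity_id, p.x, p.y, h.hp
      FROM position p JOIN health h ON h.entity_id = p.entity_id
      WHERE p.x BETWEEN 0 AND 10 AND p.y BETWEEN 0 AND 10
  """).fetchall()

  # Saving is just SQLite's Backup API: dump the in-memory DB to disk.
  with sqlite3.connect("savegame.db") as save:
      game.backup(save)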

> The other thing that might help is to have a language that actually understands that you want to do this decoupling and has the tooling built in to do constraint logic programming and enforce the "musts" and "cannots" at source level. I don't know of a language that really addresses this well for the use case of game loops - it entails having a whole general-purpose language already and then also this other feature. Big project.

Or a Lisp project. While I currently do constraint resolution at runtime, it's not hard to move it to compile time. I just didn't bother with it yet. Nice thing about Common Lisp is that the distinction between "compilation/loading" and "runtime" is somewhat arbitrary - any code I can execute in the latter, I can execute in the former. If I have a function that resolves constraints on some data structure and returns a sequence, and that data structure can be completely known at compile time, it's trivial to have the function execute during compilation instead.

> I've been taking the approach instead of aiming to develop "little languages" that compose well for certain kinds of features

I'm interested in learning more about the languages you developed - e.g. how your FSMs are encoded, and what that "programmable painter system" looks like. In my project, I do little languages too (in fact, the aforementioned "game features" are a DSL themselves) - Lisp makes it very easy to just create new DSLs on the fly, and to some extent they inherit the tooling used to power the "host" language.

lxdesk · 4 years ago
Sounds like you may be getting close to an ideal result, at least for this project! :) Nice on the use of SQLite - I agree that it's right in the ballpark of usability if you're just occasionally editing or doing simple turn-taking.

When you create gameplay tests, one of the major limitations is in testing data. Many games end up with "playground" levels that validate the major game mechanics because they have no easier way of specifying what is, in essence, a data bug like "jump height is too short to cross gap". Now, of course you can engineer some kind of test, but it starts to become either a reiteration of the data (useless) or an AI programming problem that could be inverted into "give me the set of values that have solutions fitting these constraints" (which then isn't really a "test" but a redefinition of the medium, in the same way that a procedural level is a "solution" for a valid level).
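
To illustrate the inversion, a toy sketch: rather than asserting a fixed tuning, search for the set of jump parameters that satisfy the level's constraints (flat-ground ballistics, all numbers invented):

  # Toy inversion of a "jump clears the gap" data test.
  GRAVITY = 9.8
  GAP_WIDTH, LEDGE_HEIGHT = 4.0, 1.2     # constraints imposed by the level data

  def jump_metrics(vx, vy):
      peak = vy * vy / (2 * GRAVITY)     # apex height
      airtime = 2 * vy / GRAVITY
      return peak, vx * airtime          # (height reached, distance covered)

  valid = []
  for vx10 in range(10, 101):            # vx, vy from 1.0 to 10.0 in 0.1 steps
      for vy10 in range(10, 101):
          vx, vy = vx10 / 10, vy10 / 10
          height, distance = jump_metrics(vx, vy)
          if height >= LEDGE_HEIGHT and distance >= GAP_WIDTH:
              valid.append((vx, vy))

  print(f"{len(valid)} tunings satisfy the constraints, e.g. {valid[0]}")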

It's this latter point that forms the basis of many of the "little languages". If you hardcode the constraints, then more of the data resides in a sweet spot by default and the runtime is dealing with less generality, so it also becomes easier to validate. One of my favorite examples of this is the light style language in Quake 1: https://quakewiki.org/wiki/lightstyle

It's just a short character string that sequences some brightness changes in a linear scale at a fixed rate. So it's "data," but it's not data encoded in something bulky like a bunch of floating point values. It's of precisely the granularity demanded by the problem, and much easier to edit as a result.
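
A sketch of how small that interpreter can be - the 10 Hz rate and the flicker pattern below follow Quake's convention, but treat the details as approximate:

  # Each character is one brightness step on a linear scale
  # ('a' darkest ... 'z' brightest), advancing at a fixed rate.
  FLICKER = "mmnmmommommnonmmonqnmmo"    # one of Quake 1's stock patterns

  def brightness(style, seconds, rate_hz=10.0):
      index = int(seconds * rate_hz) % len(style)
      return (ord(style[index]) - ord('a')) / (ord('z') - ord('a'))  # 0.0 .. 1.0

  for t in (0.0, 0.1, 0.2, 0.3):
      print(t, round(brightness(FLICKER, t), 2))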

A short step up from that is something like MML: https://en.wikipedia.org/wiki/Music_Macro_Language - now there is a mostly-trivial parsing step involved, but again, it's "to the point" - it assumes features around scale and rhythm that allow it to be compact. You can actually do better than MML by encoding an assumption of "playing in key" and "key change" - then you can barf nearly any sequence of scale degrees into the keyboard and it'll be inoffensive, if not great music. Likewise, you could define rhythm in terms of rhythmic textures over time - sparse, held, arpeggiated, etc. - and so not really have to define the music note by note, making it easy to add new arrangements.
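
A sketch of what the "playing in key" assumption buys you: scale degrees plus a key are enough to recover MIDI notes, so the input language can be bare digits. All of this is invented for illustration - it isn't real MML:

  # Key-aware scale-degree micro-language: digits are scale degrees,
  # '^'/'v' shift the octave; the key assumption turns them into MIDI notes.
  MAJOR = [0, 2, 4, 5, 7, 9, 11]          # semitone offsets of a major scale

  def degrees_to_midi(source, tonic=60):  # tonic=60 -> C major around middle C
      notes, octave = [], 0
      for ch in source:
          if ch == '^':
              octave += 1
          elif ch == 'v':
              octave -= 1
          elif ch.isdigit() and ch != '0':
              degree = int(ch) - 1        # '1' is the tonic
              notes.append(tonic + 12 * octave
                           + MAJOR[degree % 7] + 12 * (degree // 7))
          # anything else (spaces, bar lines) is ignored
      return notes

  print(degrees_to_midi("1353 1353 ^1v5 1"))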

With AI, a similar thing can apply - define a tighter structure and the simpler thing falls out. A lot of game AI FSMs will follow a general pattern of "run this sequenced behavior unless something of a higher priority interrupts it". So encode the sequence, then hardcode the interruption modes, then figure out if they need to be parameterized into e.g. multiple sequences, if they need to retain a memory scratchpad and resume, etc. A lot of the headache of generalizing AI is in discovering needs for new scratchpads, if just to do something like a cooldown timer on a behavior or to retain a target destination. It means that your memory allocation per entity is dependent on how smart they have to be, which depends on the AI's program. It's not so bad if you are in something as dynamic as a Lisp, but problematic in the typical usages of ECS where part of the point is to systematize memory allocation.
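
The shape of that - a data-driven sequence, hardcoded interrupt checks, and a per-entity scratchpad - might look roughly like this (names and numbers invented):

  from dataclasses import dataclass, field

  # "Run a sequence unless something higher-priority interrupts" FSM shape:
  # the sequence is data (the one-liner), the interrupt modes are hardcoded,
  # and the scratchpad holds whatever memory the AI needs (cooldowns, resume point).
  @dataclass
  class NpcAi:
      sequence: list                           # e.g. ["patrol", "pause", "patrol"]
      step: int = 0                            # resume point in the sequence
      scratch: dict = field(default_factory=dict)

      def tick(self, world) -> str:
          # Hardcoded interrupt modes, highest priority first.
          if world.get("incoming_attack"):
              return "dodge"
          if world.get("player_visible") and self.scratch.get("attack_cooldown", 0) <= 0:
              self.scratch["attack_cooldown"] = 30   # frames until next attack
              return "attack"
          self.scratch["attack_cooldown"] = max(0, self.scratch.get("attack_cooldown", 0) - 1)
          # Otherwise advance the sequenced behavior.
          action = self.sequence[self.step % len(self.sequence)]
          self.step += 1
          return action

  guard = NpcAi(["patrol", "pause", "patrol", "turn"])
  print([guard.tick({"player_visible": False}) for _ in range(5)])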

With painting what you're looking for is a structuring metaphor for classes of images. Most systems of illustration have structuring metaphors of some kind specifically for defining proportions - they start with simple ratios and primitive shapes, and then use those as the construction lines for more detailed elements which subdivide the shapes again with another set of ratios. This is the conceptual basis of the common "6-8 heads of height" tip used in figure drawing - and there are systems of figure drawing which get really specific about what shapes to draw and how.

If I encode such a system, I therefore have a method of automatic illustration that starts not with the actual "drawing" of anything, but with a proportion specification creating construction lines, which are then an input to a styling system that defines how to connect the lines or superimpose other shapes.

Something I've been experimenting with to get those lines is a system that works by interpolation of coordinate transforms that aggregate a Cartesian and polar system together - e.g. I want to say "interpolate along this Cartesian grid, after it's been rotated 45 degrees". It can also perform interpolation between two entirely different coordinate systems (e.g. the same grid at two different scales). I haven't touched it in a while, but it generates interesting abstract animations, and I have a vision for turning that into a system for specifying character mannequins, textures, etc. Right now it's too complex to be a good one-liner system, but I could get there by building tighter abstractions on it in the same way as the music system.
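
A toy version of the coordinate-interpolation idea - map the same grid point through two systems and blend the results (the mappings here are invented for illustration):

  import math

  def cartesian(u, v):
      return (u, v)

  def polar(u, v):
      # Treat u as radius and v as angle in turns.
      angle = 2 * math.pi * v
      return (u * math.cos(angle), u * math.sin(angle))

  def rotated(u, v, degrees=45):
      a = math.radians(degrees)
      return (u * math.cos(a) - v * math.sin(a), u * math.sin(a) + v * math.cos(a))

  def blend(system_a, system_b, u, v, t):
      (ax, ay), (bx, by) = system_a(u, v), system_b(u, v)
      return (ax + (bx - ax) * t, ay + (by - ay) * t)

  # A 3x3 grid, halfway between the plain Cartesian grid and its polar counterpart.
  points = [blend(cartesian, polar, u / 2, v / 2, t=0.5)
            for u in range(3) for v in range(3)]
  print(points)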

My main thing this year has been a binary format that lets me break away from text encodings as the base medium, and instead have more precise, richer data types as the base cell type. This has gone through a lot of iteration to test various things I might want to encode and forms I could encode them in. The key thing I've hit on is to encode with a lot of "slack" in the system - each "cell" of data is 16 bytes; half of that is a header that contains information about how to render it, its ID in a listing of user-named types, bitflags defined by the type, a "feature" value (an enumeration defined by the type), and a version field which could be used for various editing features. The other half is a value, which could be a 64-bit number, 8 raw bytes, a string fragment, etc. - the rendering information field indicates what it is in those general terms, but the definite meaning is named by the user type.

The goal is to use this as a groundwork to define the little languages further - rather than relying on "just text" and sophisticated parsing, the parse is trivialized by being able to define richer symbols - and then I can provide more sophisticated editing and visualization more easily. Of course, I'm placing a bet on either having a general-purpose editor for it that's worthwhile, or being able to define custom editors that trivialize editing, neither of which might pan out; there's a case for either "just text" or "just spreadsheets" still beating my system. But I'd like to try it, since I think this way of structuring the bits is likely to be more long-run sustainable.
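
One plausible way to pack a 16-byte cell along those lines with Python's struct - the field widths are guesses for the sake of illustration, not the actual layout described above:

  import struct

  # 8-byte header (render kind, user type id, bitflags, feature enum, version)
  # followed by an 8-byte value.
  CELL = struct.Struct("<BHHHB8s")          # 1+2+2+2+1 header bytes + 8 value bytes

  RENDER_INT, RENDER_STR_FRAGMENT = 0, 1    # hypothetical rendering kinds

  def pack_int_cell(type_id, flags, feature, version, value):
      return CELL.pack(RENDER_INT, type_id, flags, feature, version,
                       value.to_bytes(8, "little"))

  def pack_str_cell(type_id, flags, feature, version, text):
      return CELL.pack(RENDER_STR_FRAGMENT, type_id, flags, feature, version,
                       text.encode("utf-8")[:8].ljust(8, b"\0"))

  cell = pack_int_cell(type_id=7, flags=0b0011, feature=2, version=1, value=1234)
  print(len(cell), CELL.unpack(cell))       # 16 bytes, round-trips cleanly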

lxdesk commented on It's probably time to stop recommending Clean Code (2020)   qntm.org/clean?tw=... · Posted by u/avinassh
TeMPOraL · 4 years ago
> Put logic closest to where it needs to live (feature folders)

Can you say more about this?

I think I may have stumbled on a similar insight myself. In a side project (a roguelike game), I've been experimenting with a design that treats features as first-class, composable design units. Here is a list of the subfolder called game-features in the source tree:

  actions
  collision
  control
  death
  destructibility
  game-feature.lisp
  hearing
  kinematics
  log
  lore
  package.lisp
  rendering
  sight
  simulation
  transform
An extract from the docstring of the entire game-feature package:

  "A Game Feature is responsible for providing components, events,
  event handlers, queries and other utilities implementing a given
  aspect of the game. It's primarily a organization tool for gameplay code.
  
  Each individual Game Feature is represented by a class inheriting
  from `SAAT/GF:GAME-FEATURE'. To make use of a Game Feature,
  an object of such class should be created, preferably in a
  system description (see `SAAT/DI').
  This way, all rules of the game are determined by a collection of
  Game Features loaded in a given game.
  
  Game Features may depend on other Game Features; this is represented
  through dependencies of their classes."
The project is still very much work-in-progress (procrastinating on HN doesn't leave me much time to work on it), and most of the above features are nowhere near completion, but I found the design to be mostly sound. Each game feature provides code that implements its own concerns, and exports various functions and data structures for other game features to use. This is an inversion of traditional design, and is more similar to the ECS pattern, except I bucket all conceptually related things in one place. ECS Components and Systems, utility code, event definitions, etc. that implement a single conceptual game aspect live in the same folder. Inter-feature dependencies are made explicit, and game "superstructure" is designed to allow GFs to wire themselves into appropriate places in the event loop, datastore, etc. - so in game startup code, I just declare which features I want to have enabled.

(Each feature also gets its set of integration tests that use synthetic scenarios to verify a particular aspect of the game works as I want it to.)

One negative side effect of this design is that the execution order of handlers for any given event is hard to determine from code. That's because, to have game features easily compose, GFs can request particular ordering themselves (e.g. "death" can demand its event handler to be executed after "destructibility" but before "log") - so at startup, I get an ordering preference graph that I reconcile and linearize (via topological sorting). I work around this and related issues by adding debug utilities - e.g. some extra code that can, after game startup, generate a PlantUML/GraphViz picture of all events, event handlers, and their ordering.
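
A toy version of that reconcile-and-linearize step (here using Python's graphlib purely for illustration; the graph just mirrors the death/destructibility/log example):

  from graphlib import TopologicalSorter

  # Each entry lists the handlers that must run before it.
  runs_after = {
      "destructibility": set(),
      "death": {"destructibility"},      # death asked to run after destructibility
      "log": {"death"},                  # and log after death
      "rendering": {"log"},
  }
  print(list(TopologicalSorter(runs_after).static_order()))
  # e.g. ['destructibility', 'death', 'log', 'rendering']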

(I apologize for a long comment, it's a bit of work I always wanted to talk about with someone, but never got around to. The source of the game isn't public right now because I'm afraid of airing my hot garbage code.)

lxdesk · 4 years ago
I've gone down roads similar to this. Long story short: with respect to games, the architecture solves for a lower-priority class of problem, so it doesn't pay a great dividend, and you add a combination of boilerplate and dynamism that slows down development.

Your top issue in the runtime game loop is always with concurrency and synchronization logic - e.g. A spawns before B and A's hitbox overlaps with B's: is the first frame on which a collision event occurs the frame of spawning, or one frame after? That's the kind of issue that is hard to catch, occurs rarely, and often has some kind of catastrophic impact if handled wrongly. But the actual effect of the event is usually a one-liner like "set a stun timer" - there is nothing to test with respect to the event itself! The perceived behavior is intimately coupled to when its processing occurs and when the effects are "felt" elsewhere in the loop - everything's tied to some kind of clock, whether it's the CPU clock, the rendered frame, turn-taking, or an abstracted timer. These kinds of bugs are a matter of bad specification, rather than bad implementation, so they resist automated testing mightily.

The most straightforward solution is, failing pure functions, to write more inline code (there is a John Carmack posting on inline code that I often use as a reference point). Enforce a static order of events as often as possible. Then debugging is always a matter of "does A happen before B?" It's there in the source code, and you don't need tooling to spot the issue.

The other part of this is, how do you load and initialize the scene? And that's a data problem that does call for more complex dependency management - but again, most games will aim to solve it statically in the build process of the game's assets, and reduce the amount of game state being serialized to save games, reducing the complexity surface of everything related to saves (versioning, corruption, etc.). With a roguelike there is more of an impetus to build a lot of dynamic assets (dungeon maps, item placements, etc.) which leads to a larger serialization footprint. But ultimately the focus of all of this is on getting the data to a place where you can bring it back up and run queries on it, and that's the kind of thing where you could theoretically use SQLite and have a very flexible runtime data model with a robust query system - but fully exploiting it wouldn't have the level of performance that's expected for a game.

Now, where can your system make sense? Where the game loop is actually dynamic in its function - i.e. modding APIs. But this tends to be a thing you approach gradually and grudgingly, because modders aren't any better at solving concurrency bugs and they are less incentivized to play nice with other mods, so they will always default to hacking in something that stomps the state, creating intermittent race conditions. So in practice you are likely to just have specific feature points where an API can exist (e.g. add a new "on hit" behavior that conditionally changes the one-liner), and those might impose some generalized concurrency logic.

The other thing that might help is to have a language that actually understands that you want to do this decoupling and has the tooling built in to do constraint logic programming and enforce the "musts" and "cannots" at source level. I don't know of a language that really addresses this well for the use case of game loops - it entails having a whole general-purpose language already and then also this other feature. Big project.

I've been taking the approach instead of aiming to develop "little languages" that compose well for certain kinds of features - e.g. instead of programming a finite state machine by hand for each type of NPC, devise a subcategory of state machines that I could describe as a one-liner, with chunks of fixed-function behavior and a bit of programmability. Instead of a universal graphics system, have various programmable painter systems that can manipulate cursors or selections to describe an image. The concurrency stays mostly static, but the little languages drive the dynamic behavior, and because they are small, they are easy to provide some tooling for.

lxdesk commented on Ask HN: How to get started with audio programming?    · Posted by u/Flex247A
lxdesk · 4 years ago
Implement a MIDI 1.0 sequencer and get it to play back SMF files with a sine wave synth - you can have it output a WAV file, or learn an API to do realtime rendering. It's not a large spec, and there are lots of old documents on how the protocol functions in practice. Once you start getting it working you'll get results instantly (there are lots of SMF files around to test with), but you'll want more features and better synthesis; complications start to arise and you will then pick up a lot of knowledge by doing.
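
A toy starting point for the rendering half - synthesize a few sine-wave notes straight into a WAV with the standard library. The SMF parsing side is left out; the hardcoded note list stands in for a parsed track:

  import math, wave, struct

  # Take (midi_note, start_sec, dur_sec) events, synthesize sine waves,
  # and write a 16-bit mono WAV.
  RATE = 44100
  events = [(60, 0.0, 0.5), (64, 0.5, 0.5), (67, 1.0, 0.5), (72, 1.5, 1.0)]

  total = max(start + dur for _, start, dur in events)
  samples = [0.0] * int(total * RATE)

  for note, start, dur in events:
      freq = 440.0 * 2 ** ((note - 69) / 12)        # MIDI note -> Hz
      for i in range(int(start * RATE), int((start + dur) * RATE)):
          samples[i] += 0.3 * math.sin(2 * math.pi * freq * (i / RATE))

  with wave.open("out.wav", "wb") as w:
      w.setnchannels(1)
      w.setsampwidth(2)
      w.setframerate(RATE)
      w.writeframes(b"".join(struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
                             for s in samples))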
lxdesk commented on Filecoin, StorJ and the problem with decentralized storage (2019)   randomoracle.wordpress.co... · Posted by u/arthur2e5
omginternets · 4 years ago
Well put. Thanks for putting that into words.

The confusion between structurelessness and decentralization is interesting in and of itself, though. It seems like it is exactly this conflation that prevents people from asking the interesting question: when does a centrally-organized social system (e.g. a business) benefit from a decentralized protocol?

It seems like this should have something to do with usage/access patterns within the business, yet we never hear about this.

lxdesk · 4 years ago
Here's an example: Cash decentralizes credit.

"Cash and carry" grocery outlets were a 20th century innovation. [0] Before that, the norm was to have a line of credit with the business, and in many cases, to accept their terms for delivery. Cash transactions anonymize, since settlement is done at the counter. You don't have to assess the buyer, you just need to verify the bills and change are real.

However, that isn't the entire story. While there were businesses using cash before this, they faced difficulties with accounting, supply logistics, and other elements that made it hard to conceive of something like a "supermarket", carrying a vast variety and quantity of goods on a daily basis. So then we have to look at all the pieces that fell into place to make it possible.

The automobile decentralized access to mobility, making carry-away a real possibility for more people, and thus making it possible to apply cash-and-carry in more places. Supporting elements like cash registers and refrigeration were becoming mature enough to support new forms of retail and allow more parts of the transaction to be delegated to local outlets and low-wage employees. The inter-war years really saw a whole set of technological innovations that were used in combinations to propel social changes and different categories of business (e.g. fast food), many of them decentralized in some respects but centralized in others - supermarket chains, as opposed to local markets.

These are the kinds of changes that are hardest to assess in full; when you decentralize one thing, centralization is "squeezed" into other parts of the economy, it seems. The obvious example of this phenomenon is Amazon, leveraging an apparently decentralizing mechanism (online retail - premised on an internet with sufficient bandwidth and security to list goods and take payments) into becoming the world's largest retailer. So it's centralized on one axis, but decentralized on others - a buyer no longer has to go to a particular physical location to purchase something, when all of it can be delivered to the doorstep.

[0] https://en.wikipedia.org/wiki/Cash_and_carry_(wholesale)

lxdesk commented on 20-Minute Neighborhoods   theconversation.com/peopl... · Posted by u/simonebrunozzi
asdff · 4 years ago
I think the worst part is that while it is easy to see how this came to be, I don't know how you put it back. It's like asking how you unburn a forest. You can't put it back together. One shop opening in an otherwise empty commercial corridor on Main Street is going to look like it's failing and close down, with the rest of the block remaining vacant. Local planners then turn to razing historic commercial blocks to turn them into some cookie-cutter chain, since no one local has any capital anymore to start a business.
lxdesk · 4 years ago
I think the probable reversal of the cycle is:

1. Towns revise their taxation and zoning laws so that more classes of business are permitted in residences. They also start issuing more forms of local credit (the technical means to do so are only getting better), excluding big-box participation and restarting the cycle of capital accumulation locally.

2. Costs are lower and incentives are now aligned for more small businesses to survive in marginal areas.

3. Big-box stores increasingly become commodified and unbundled, themselves; the shift from Main Street to Wal-Mart to Amazon is one of the warehouse turning into a store and then back into a warehouse. The services of shipping logistics and delivery become less of a centralized process. Now the local businesses are using the big-box to their benefit.

Wal-Mart's success is ultimately premised on policies that let capital centralize itself according to a national and global framework. But that's only one way of "seeing" the economy, since following that policy, as we know, creates a mix of expensive star cities and dying no-hope towns. It's improbable that the future will simply be a restatement of the post-1970 trends, given what we know about history - something will change.

lxdesk commented on What is an NFT from an artist’s perspective?   alex-pardee.medium.com/wh... · Posted by u/jds375
chpmrc · 4 years ago
Not sure about the first point; after all, with digital art what matters is the NFT, since anyone can copy the artifact 1:1 at virtually no cost and no risk. I agree with many that NFTs are best suited for digital worlds, but that raises the question: what happens if the game is centralized? They can still prevent you from actually having or using that item, regardless of any proof of ownership.

Re: wash trading, that also happens in the art world with shell companies.

lxdesk · 4 years ago
Use value is easy to assign to an NFT post facto. That hasn't really been done in the current market (which is, of course, in the midst of a bubble), but:

* Tokens can become tickets to events

* Tokens can become options on commissioned work

* Tokens can become signs of membership

Because the token is guaranteed to be unique, and you can track ownership, there's a fluidity to this that lets you do away with contractual mechanisms. You can reuse the same tokens many times or announce that they will expire (for your use case).

Edit: And platforms can't really own it if it's on a public chain, either. You just copy the chain (see: BinancePunks copying CryptoPunks). So there's that.
