
Post edit history

Zero-k Running on ONE core at a time!!!

Date                   Editor
8/28/2015 2:46:36 AM   TheEloIsALie
8/28/2015 2:42:02 AM   TheEloIsALie
8/28/2015 2:41:09 AM   TheEloIsALie
[quote]What's even more interesting is that callins may call code that activates other callins.[/quote]
That's the thing I was wondering about.

Aren't gadget writers either required to pay attention to threadability, or risk killing any threading benefit if they desperately require execution order restrictions though? Hm, gives me something to think about.
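To make that concrete, here is a minimal sketch of a synced gadget whose result depends entirely on the order in which UnitDestroyed callins arrive. The gadget itself (the name, the "last death" bonus rule, the number 10) is invented purely for illustration; only gadget:UnitDestroyed, gadget:GameFrame and Spring.AddTeamResource are standard Spring callins/calls.

[code]
-- Illustration only: a synced gadget with order-dependent shared state.
-- Whichever UnitDestroyed callin runs last in a frame decides which team
-- gets the bonus. If the engine delivered these callins from parallel
-- threads in whatever order the deaths happened to be computed, clients
-- could disagree on "last" and desync.
function gadget:GetInfo()
  return {
    name    = "Last Death Bonus (example)",
    desc    = "Awards metal to the team owning the last unit destroyed each frame",
    author  = "illustration",
    layer   = 0,
    enabled = true,
  }
end

if not gadgetHandler:IsSyncedCode() then
  return  -- synced part only
end

local lastDeadTeam  -- shared state written by every UnitDestroyed callin

function gadget:UnitDestroyed(unitID, unitDefID, unitTeam, attackerID, attackerDefID, attackerTeam)
  lastDeadTeam = unitTeam  -- whichever callin runs last wins
end

function gadget:GameFrame(frame)
  if lastDeadTeam ~= nil then
    Spring.AddTeamResource(lastDeadTeam, "metal", 10)
    lastDeadTeam = nil
  end
end
[/code]

A gadget like this only stays in sync if every client runs the frame's UnitDestroyed callins in the same order, which is exactly the kind of execution order restriction I mean.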
Well, here's my reasoning:
Suppose we have two units dying, each causing a lua callin which will have further impact. The order of the callins is sync-relevant.
Making the result of the situation independent of the order of the callins [i]from within the lua[/i] seems unreasonable/impossible. Therefore, in the engine, either
a) the unit deaths need to happen within the same thread on all clients (since this scenario could be extended to virtually all physics events, this essentially means no multithreading), or
b) the order of the lua calls is made independent of the order of the events being processed (what I was talking about with "delaying" the calls). If the result of the lua stuff for the first unit changes what happens to the second unit, the engine might have to redo that calculation (I'm being horribly unspecific, right?) and only then perform the second lua callin. A rough sketch of this buffering idea follows below.
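For what it's worth, here is a very rough sketch of what (b) could mean, written as plain Lua rather than whatever the engine would actually do in C++. The event fields, the function names and the sort-by-unitID key are all assumptions for illustration, not how the engine really works:

[code]
-- Conceptual sketch of option (b): buffer events produced during the
-- (parallel) simulation phase, then fire the lua callins in a canonical
-- order that does not depend on thread scheduling. Purely illustrative.
local pendingDeaths = {}

-- Called from simulation work (would need to be made thread-safe there).
local function QueueUnitDestroyed(unitID, unitDefID, unitTeam, attackerID)
  pendingDeaths[#pendingDeaths + 1] = {
    unitID = unitID, unitDefID = unitDefID,
    unitTeam = unitTeam, attackerID = attackerID,
  }
end

-- Called once per sim frame, after all parallel work has finished.
local function FlushUnitDestroyed(dispatchCallin)
  -- Canonical order (here: by unitID), identical on every client.
  table.sort(pendingDeaths, function(a, b) return a.unitID < b.unitID end)
  for _, ev in ipairs(pendingDeaths) do
    -- Each callin may change the game state again, which is why the
    -- engine might have to re-check its own results in between.
    dispatchCallin(ev.unitID, ev.unitDefID, ev.unitTeam, ev.attackerID)
  end
  pendingDeaths = {}
end
[/code]

The sort buys a canonical order that doesn't depend on thread scheduling, but the callins themselves still run strictly one after another.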
If you imagine this for 100 units killing each other one by one through a chain reaction [i]because of lua stuff[/i] happening for each unit death, then it obviously won't really be parallel, but in the same obviousness it never could be anyway. In other words, if it takes 10 sequential communications between engine and lua to properly process a single event, then it'll take that many with multithreading, too, right (again, assuming that lua is to be left out of the parallel business)?
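To illustrate why such a chain can't be parallelised no matter how the engine threads its own work, here is a deliberately silly sketch (synced gadget part only, boilerplate omitted). BuildChain and the chain table are made up; gadget:UnitDestroyed, Spring.ValidUnitID and Spring.DestroyUnit are standard:

[code]
-- Illustration only: a chain-reaction gadget. Each unit's death destroys
-- the next unit in the chain, so death N can only be processed after the
-- full engine -> lua -> engine round trip for death N-1 has finished.
local chainNext = {}  -- chainNext[unitID] = unitID of the next unit to destroy

-- Hypothetical setup helper: link a list of unit IDs into a chain.
local function BuildChain(unitIDs)
  for i = 1, #unitIDs - 1 do
    chainNext[unitIDs[i]] = unitIDs[i + 1]
  end
end

function gadget:UnitDestroyed(unitID, unitDefID, unitTeam, attackerID, attackerDefID, attackerTeam)
  local nextID = chainNext[unitID]
  chainNext[unitID] = nil
  if nextID and Spring.ValidUnitID(nextID) then
    -- Re-enters the engine, which will fire UnitDestroyed for nextID,
    -- which destroys the unit after that, and so on: one round trip per death.
    Spring.DestroyUnit(nextID)
  end
end
[/code]

Link 100 units like that and you get 100 strictly sequential engine-lua round trips, no matter how many cores the engine itself uses.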