DATA-ORIENTED DESIGN The hardware will thank you.

If you wish to submit an article, please contact support@dataorienteddesign.com for details.


Lots of good resources linked from this site by Daniele Bartolini: Data Oriented Design Resources

Data-Oriented Design book (2018 version)- html with links

Data-Oriented Design book (2018 version) - PDF download (better images)

Data-Oriented Design book - 2018 paperback version (with extra chapters)

Chinese translation provided by KouZhe / Martin Cole

Data-Oriented Design book (2018 version, Chinese translation) - html with links

Data-Oriented Design book (2018 version, Chinese translation) - PDF download (better images)


Server downgrade 14/11/2012:21:06:13

Server is now a Raspberry Pi. It should be enough for the kind of traffic I see here, but if anyone has trouble getting through, send an e-mail to support.

Intrusive Linked Lists 10/09/2012:10:36:09

...again

but this time there's an interesting statement that needs to be investigated. In the section titled Why intrusive lists are better, there is an argument:

When traversing objects stored on an intrusive linked list, it only takes one pointer indirection to get to the object, compared to two pointer indirections for std::list. This causes less memory-cache thrashing so your program runs faster — particularly on modern processors which have huge delays for memory stalls.

Right

Think about it a bit more and you realise that this must in fact be false. Where are the pointers to the next elements? They are in the middle of the structures being traversed. They're not all together, huddled up on cache lines; they're split apart by at least the size of the object they are linking. This means you save one indirection, but potentially at the cost of many loads from separate areas of memory, which will cane your memory bandwidth.
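
To make the trade-off concrete, here is a minimal sketch (the Particle type and its size are made up purely for illustration) of where the link pointers actually live in the two schemes:

    #include <list>

    // Intrusive: the link lives inside the (possibly large) object, so every
    // traversal step is a load from wherever that whole object sits in memory.
    struct Particle {
        Particle* next = nullptr;   // the intrusive link
        float     state[64];        // stand-in for the rest of the object
    };

    // Non-intrusive: std::list<Particle*> allocates a separate small node of
    // roughly {prev, next, Particle*}; reaching the Particle costs one extra
    // indirection, but the nodes themselves are compact.
    using ParticleList = std::list<Particle*>;

    // Traversing the intrusive version: one load per object, but each load
    // jumps to a different, object-sized region of memory.
    inline int count(const Particle* head) {
        int n = 0;
        for (const Particle* p = head; p; p = p->next) ++n;
        return n;
    }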

Apart from that minor snag, the article is sound, but when you come across a false statement like that it does make you question the accuracy of the rest of the piece. Foremost in your mind when you do any performance-related work must be profiling. If the author had profiled, he might have discovered that this trade-off exists and improved the design further, or at least, knowing about it, been better armed.

http://www.codeofhonor.com/blog/avoiding-game-crashes-related-to-linked-lists?utm_source=rss&utm_medium=rss&utm_campaign=avoiding-game-crashes-related-to-linked-lists

Keep your types separated from the functions that operate on them 6/09/2012:00:53:34

Keeping your nouns separate from your verbs can be handy, and in this case it's being used to decrease coupling in the physical layout of code.
http://www.altdevblogaday.com/2012/09/03/a-new-way-of-organizing-header-files/
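
As a sketch of the idea (the file names and types here are mine, not the article's): the nouns go in one header that is cheap to include everywhere, and the verbs in another that only the translation units doing the work need to pull in.

    // vec3_types.h -- just the data; cheap to include from anywhere.
    struct Vec3 { float x, y, z; };

    // vec3_ops.h -- the verbs; only code that actually computes with Vec3
    // needs this header, so editing an operation rebuilds far fewer files.
    #include "vec3_types.h"
    Vec3  add(Vec3 a, Vec3 b);
    float dot(Vec3 a, Vec3 b);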

Memory Access Patterns Are Important 8/08/2012:21:36:18

"In high-performance computing it is often said that the cost of a cache-miss is the largest performance penalty for an algorithm."
http://mechanical-sympathy.blogspot.co.uk/2012/08/memory-access-patterns-are-important.html
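
The article measures walks over a large array with different strides; a minimal sketch of the kind of comparison involved (names and shapes are mine, not the author's):

    #include <cstddef>
    #include <vector>

    // Same data, same arithmetic, very different memory behaviour: the first
    // walk is sequential and prefetcher-friendly, the second strides widely
    // and can miss the cache on nearly every access.
    long long sum_sequential(const std::vector<int>& v) {
        long long s = 0;
        for (std::size_t i = 0; i < v.size(); ++i) s += v[i];
        return s;
    }

    long long sum_strided(const std::vector<int>& v, std::size_t stride) {
        long long s = 0;
        for (std::size_t start = 0; start < stride; ++start)
            for (std::size_t i = start; i < v.size(); i += stride) s += v[i];
        return s;
    }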

interesting algorithm of the week 8/08/2012:21:25:57

The Burrows-Wheeler transform, presented with an interactive demonstration of how it works.
http://blog.avadis-ngs.com/2012/04/elegant-exact-string-match-using-bwt-2/
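
For a flavour of the transform itself, here is a deliberately naive sketch (real implementations use suffix arrays rather than materialising every rotation):

    #include <algorithm>
    #include <cstddef>
    #include <string>
    #include <vector>

    // Burrows-Wheeler transform, the slow obvious way: form every rotation of
    // the input, sort them, and read off the last column. The output tends to
    // group similar characters together, which is what makes it compress well.
    std::string bwt(const std::string& in) {
        const std::string s = in + '\0';   // sentinel marks the original rotation
        std::vector<std::string> rotations;
        for (std::size_t i = 0; i < s.size(); ++i)
            rotations.push_back(s.substr(i) + s.substr(0, i));
        std::sort(rotations.begin(), rotations.end());
        std::string out;
        for (const std::string& r : rotations) out += r.back();
        return out;
    }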

Booleans as parameters can be seen as a code-smell 23/07/2012:10:57:26

Boolean parameters are usually used to control code flow inside a function from the outside. With no further information, it should be a simple deduction that this is unnecessary in almost all cases: if the code flow is meant to be controlled from the outside, why not introduce two different functions? If there are multiple boolean switches on the code flow, then it's probably true that the callee does too much.

Booleans as arguments are an alternative to having a function do what it says it does. The Ronseal rule ("it does exactly what it says on the tin"), a very good rule, forbids this.
http://ernstsson.net/post/27787949222/boolean-parameter-elimination
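
The shape of the refactor, with a made-up Document type standing in for whatever the real code passes around:

    struct Document { /* ... */ };

    // Before: the caller steers the callee's control flow from outside, and
    // every call site reads as save(doc, true) -- true what?
    void save(const Document& doc, bool compress);

    // After: two entry points that each do exactly what their name says; any
    // shared work lives in a private helper behind them.
    void save(const Document& doc);
    void saveCompressed(const Document& doc);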

Reconsider the layout of your data 23/07/2012:08:37:15

If you put your data into classes, then you're limited to the basic structures the language gives you, namely a fixed set of fields declared in your classes. Runtime changes to the structure of a class are very difficult to achieve in C++ without invoking arcane and hard-to-debug techniques. With blobs, or a simpler access pattern such as a free function to get at a variable, you can reimagine the data structures in new ways.
http://simblob.blogspot.co.uk/2012/07/playing-with-dot-operator.html
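
A rough sketch of the free-function flavour of this (EntityId and the arrays are illustrative names, not from the linked post): callers never dot into a member, so the storage behind the accessor can change shape without touching call sites.

    #include <cstddef>
    #include <vector>

    using EntityId = std::size_t;

    namespace detail {
        // Current layout: struct-of-arrays. Could become an array-of-structs,
        // a packed blob, or a computed value without any caller noticing.
        std::vector<float> xs, ys;
    }

    inline float positionX(EntityId id) { return detail::xs[id]; }
    inline float positionY(EntityId id) { return detail::ys[id]; }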

MVC is dead... 04/07/2012:12:41:12

Sometimes reinterpretation can bring old ideas back to be fully realised. The MVC pattern is one of the good design patterns because it promotes separation of state from interpretation and action. MOVE is perhaps a clearer expression of what MVC aims to provide.
http://cirw.in/blog/time-to-move-on

An example of how branch prediction affects your code 28/06/2012:12:52:55

This is a really good example of how understanding the shape of your data can help you make good decisions about your code.
http://stackoverflow.com/questions/11227809/why-is-processing-a-sorted-array-faster-than-an-unsorted-array
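
The accepted answer's experiment boils down to something like this: the same loop runs several times faster once the data is sorted, purely because the branch becomes predictable.

    #include <algorithm>
    #include <vector>

    // On random values the condition holds about half the time, so the branch
    // predictor keeps guessing wrong; sort first and the loop becomes a long
    // run of not-taken followed by a long run of taken.
    long long sum_big_values(std::vector<int>& data, bool sort_first) {
        if (sort_first) std::sort(data.begin(), data.end());
        long long sum = 0;
        for (int x : data)
            if (x >= 128) sum += x;   // the hard-to-predict branch
        return sum;
    }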

Reducing distance data travels by moving the CPU near the memory 18/06/2012:21:25:26

If you're trying to reduce the amount of energy spent getting data to your CPUs, then maybe this idea will turn out to be the better route to efficient data movement.

Whether you're talking about high performance computers, enterprise servers, or mobile devices, the two biggest impediments to application performance in computing today are the memory wall and the power wall. Venray Technology is aiming to knock down those walls with a unique approach that puts CPU cores and DRAM on the same die. The company has been in semi-stealth mode since its inception seven years ago, but is now trying to get the word out about its technology as it searches for a commercial buyer.

http://www.hpcwire.com/hpcwire/2012-01-17/designer_of_microprocessor-memory_chip_aims_to_topple_memory_and_power_walls.html

Far Cry 3 - SPU effectiveness 18/06/2012:13:40:13

engineroom.ubi.com/the-spus-are-hungry-maximizing-spu-efficiency-on-far-cry-3

In Praise of Idleness 18/06/2012:13:25:56

Check out this article by Bruce Dawson on the many types of waiting. www.altdevblogaday.com/.../in-praise-of-idleness

Adding intrinsics to Dart. 18/06/2012:13:25:04

It's never too late to add intrinsics to a language. If we're to continue using non-native languages in our browsers, adding SIMD and other hardware-oriented features will save us some energy and money. John McCutchan has done just this: bringing-simd-accelerated-vector-math
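
Dart's API is its own, but the underlying idea is the familiar one from native intrinsics; roughly the C++/SSE equivalent looks like this:

    #include <xmmintrin.h>   // SSE intrinsics

    // Four float additions issued as a single instruction.
    void add4(const float* a, const float* b, float* out) {
        __m128 va = _mm_loadu_ps(a);
        __m128 vb = _mm_loadu_ps(b);
        _mm_storeu_ps(out, _mm_add_ps(va, vb));
    }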

An introduction to lock-free programming 18/06/2012:12:05:34

Preshing writes well and about subjects that matter. See his post on what lock-free really means and how to get started here.
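
The basic shape of most lock-free code is a compare-exchange retry loop; a small sketch (mine, not from Preshing's post):

    #include <atomic>

    // Read, compute, then try to publish with a compare-exchange, retrying if
    // another thread won the race. No thread ever blocks holding a lock, so
    // some thread always makes progress.
    void record_max(std::atomic<int>& shared_max, int candidate) {
        int observed = shared_max.load(std::memory_order_relaxed);
        while (candidate > observed &&
               !shared_max.compare_exchange_weak(observed, candidate,
                                                 std::memory_order_release,
                                                 std::memory_order_relaxed)) {
            // on failure 'observed' holds the newly seen value; loop and retry
        }
    }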

Things you must read. 13/06/2012:16:04:43

  • DATA-ORIENTED DESIGN by Noel Llopis.
    The Game Developer magazine article, published in September 2009, that started it all.
  • Pitfalls of Object Oriented Programming by Tony Albrecht
    The PDF of slides that woke many game developers up to the potential problems being caused by continuing the trend of more and more Object-oriented code without considering the hardware.
  • Typical C++ bullshit by Mike Acton
    An interesting take on how to make a slide presentation, with equally interesting content. Be sure to check out the rest of his SmugMug gallery of tips on concurrent software design.

Data movement is what matters, not arithmetic 12/06/2012:16:31:56

November 1, 2006 lecture by William Dally for the Stanford University Computer Systems Colloquium (EE 380).

A discussion about the exploration of parallelism and locality with examples drawn from the Imagine and Merrimac projects and from three generations of stream programming systems.

Andrew Richards talks about OpenCL's future. 12/06/2012:16:05:10

semiaccurate.com/.../andrew-richards-talks-about-opencls-future

The Web Will Die When OOP Dies 10/06/2012:21:41:21

Zed Shaw of http://programming-motherfucker.com/ presents a great talk about the web and how OOP is causing pain.
http://vimeo.com/43380467#

not just the shape or content of the data 06/05/2012:21:05:54

but also how it gets there.

http://www.altdevblogaday.com/2012/05/16/sql-server-high-performance-inserts/

input is data 04/05/2012:21:07:54

Very interesting read, but one of the understated takeaways is that playing a game is data: playing a game means generating data from data. Lekktor took this principle to the extreme; it took the player's input as the data by which it decided how to morph the code.

It's crazy, but it's something to think about. The data in the form of player input counts. Player input was used to measure code coverage, and to some extent this is why automated bot tests can return bogus performance profiles. If using Lekktor were taken for granted, what would be necessary to make it not a crazy idea?

The first step could be to introduce unit tests of a different sort. For everything the game can do, a unit test would make the game do it, so Lekktor wouldn't forget about it. If someone finds a feature missing from the final game, then you missed a unit test. Knowing that Lekktor won't let your code live without a unit test would also provoke you into writing said test, which wouldn't be a bad thing at all, now would it?

There are some other things to think about too. If a player is unlikely to do something, then we all know it's more likely to be buggy because it's less likely to be tested; but also, things that are less likely deserve less developer time. In turn this allows us to make trade-offs. For example, it's seen as quite natural to ignore framerate issues in areas the player is unlikely to see in favour of fixing the framerate in areas the player is highly likely to see. Lekktor allows us another view of the code: it can tell us which areas of the code are used little, and from that we can deduce which areas are potentially more dangerous than others.

During development, it's important to have all the optional but not actually used code paths available, but in a final build it's not just the debugging code that should be eradicated: so should all the code that was only ever reached from the debug code. Lekktor could potentially be that tool, but only after all the crazy is taken out.

http://fgiesen.wordpress.com/2012/04/08/metaprogramming-for-madmen/

A slow realisation 01/05/2012:23:07:37

Chris Turner describes how, over a number of years of taking an increasingly functional approach to development, he realised that the advertised features of Object Oriented design don't quite match up with reality. http://skipoleschris.blogspot.co.uk/2012/04/life-without-objects.html

When you can change to match the data, you can be more efficient. 17/04/2012:15:07:50

www.hpcwire.com/.../latest_fpgas_show_big_gains_in_floating_point_performance

remember time then doing analysis of space 08/04/2012:12:12:31

Some elements of development have time and space tied to each other in such a literal way that it's hard to think of a reason not to worry about both at the same time.

Take asset compression.

For physical media, there is a certain amount of compression required in order to fit your game on a disc. Without any compression at all, many games would either take up multiple discs or just not have much content. With compression, load times go down and processor usage goes up. But how much compression is wanted? There are some extremely high-ratio compression algorithms around that would allow some multiple-DVD titles to fit on one disc, but would it be worth investing the time and effort in them?

The only way to find out is to look at the data; in this case, the time it takes to load an asset from disc versus the time it takes to decompress it. If the time to decompress is less than the time to read, then it's normally safe to assume you can try a more complex compression algorithm, but only in the case where you have the compute resources spare to do the decompression.
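
A back-of-the-envelope version of that rule, with invented names and numbers; reads and decompression are assumed to overlap, so the slower of the two dominates the load time:

    struct CodecEstimate {
        double compressed_mb;     // size of the asset on disc with this codec
        double decompress_mb_s;   // decompression speed on the target hardware
    };

    // If the decompressor keeps up with the disc, a stronger codec that shrinks
    // compressed_mb is a straight win; if it cannot, the CPU is the bottleneck.
    double load_time_seconds(const CodecEstimate& c, double disc_read_mb_s) {
        double read_time       = c.compressed_mb / disc_read_mb_s;
        double decompress_time = c.compressed_mb / c.decompress_mb_s;
        return read_time > decompress_time ? read_time : decompress_time;
    }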

Imagine a game where the level is loaded and play commences. In this environment, the likelihood that you have a lot of compute resources available during asset loading is very high indeed: loading screens cover the fact that the CPUs/GPUs are being fully utilised in mitigating disc throughput limits.

Imagine instead a free-roaming game where, once you're in the world, there are no loading screens. In this environment, the likelihood of good compute resources going spare is low, so decompression algorithms have to be lightweight, and assets need to be built so that streaming content is simple and fast too.

Always consider how the data is going to be used, and also what state the system will be in when it uses it. Testing your technologies in isolation is a sure-fire way to give yourself a horrible crunch period when you try to stick it all together at the end.

Government of data 07/04/2012:16:26:27

The UK government website is promoting a new set of design principles that apply just as well to many other forms of design where data needs to be understood in order to be consumed. Anyone creating tools to help visualise data in any field can take cues from this resource. The website itself attempts to conform to its own rules, minimising effort on the part of the user and maintaining readability through scaled glyphs and simple, active but non-intrusive page elements.

https://www.gov.uk/designprinciples

Separation of compute, control and transfer 21/03/2012:18:06:48

Every time you use an accumulator or a temporary variable, your potential for concurrency suddenly drops.

This short article goes over some of the higher-level auto-parallelising languages that attempt to leverage the power of GPGPU, but are hindered in scalability by their attempt to give programmers what they are used to, rather than what they really need and no more.

http://www.streamcomputing.eu/blog/2012-03-21/separation-of-compute-control-and-transfer/
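
As an illustration of the accumulator point above (a sketch, not taken from the article): written as a plain loop, every iteration depends on the previous one; written as a reduction, the dependency chain disappears and the work can be split into a tree of partial sums.

    #include <numeric>
    #include <vector>

    // Serial: each += depends on the accumulator written the iteration before.
    float total_serial(const std::vector<float>& xs) {
        float acc = 0.0f;
        for (float x : xs) acc += x;
        return acc;
    }

    // As a reduction (C++17): the runtime, or a GPU backend, is free to
    // reassociate and parallelise the sum.
    float total_reduction(const std::vector<float>& xs) {
        return std::reduce(xs.begin(), xs.end(), 0.0f);
    }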