About Me
Mitch Wheat has been working as a professional programmer since 1984, graduating with an honours degree in Mathematics from Warwick University, UK, in 1986. He moved to Perth in 1995, having worked in software houses in London and Rotterdam. He has worked in the areas of mining, electronics, research, defence, finance, GIS, telecommunications, engineering, and information management. Mitch has worked mainly with Microsoft technologies (since Windows version 3.0) but has also used UNIX. He holds the following Microsoft certifications: MCPD (Web and Windows) using C#, and SQL Server MCITP (Admin and Developer). His preferred development environment is C#, the .Net Framework, and SQL Server. Mitch has worked as an independent consultant for the last 10 years and is currently involved with helping teams improve their Software Development Life Cycle. His areas of special interest lie in performance tuning.
Sunday, May 22, 2011

Large Object Heap and Arrays of Double

If you were asked where objects greater than or equal to 85,000 bytes are allocated in .NET, you would no doubt say on the Large Object Heap (LOH). But what would you say if you were asked where an array of 1000 doubles is allocated? Currently, as of .NET 4.0, it is allocated on the Large Object Heap!

William Wegerson (aka OmegaMan), a C# MVP, posted this item to Connect, "Large Object Heap (LOH) does not behave as expected for Double array placement", which describes and reproduces the behaviour:

byte[] arrayLessThan85K = new byte[84987]; // 12 bytes of object overhead: 84987 + 12 = 84999, just under the 85,000-byte threshold

By looking at the garbage collection generation of an object on creation (GC.GetGeneration), we can identify whether it resides on the LOH: an object that is in generation 2 immediately after creation must have been allocated there.

The reason double arrays with 1000 or more elements are allocated on the LOH is performance: the LOH is aligned on 8-byte boundaries, which allows faster access to large arrays, and the trade-off point was determined to be 1000 doubles. According to Claudio Caldato, CLR Performance and GC Program Manager, "there's no benefit to applying this heuristic on 64-bit architectures because doubles are already aligned on an 8-byte boundary". Subsequent changes have been made to this heuristic that should appear in a future release of the .NET Framework.

As a side note, the expected behaviour is seen if you use Array.CreateInstance():

// As noticed by @Romout in the comments to that post, the same behaviour is not seen when using Array.CreateInstance()

A complete console repro is sketched after the comments below.

2 Comments:

Thanks Mitch! I didn't know that one.
By Unknown, at May 23, 2011 6:23 pm

I updated my blog post "Tribal Knowledge: Large Object Heap and Double Arrays in .Net" to reflect that this has not changed in .Net 4; but it should have, because 64-bit operations are already aligned. Check out my blog for the response that MS gave!
By William Wegerson, at May 25, 2011 5:57 am
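Putting the pieces above together, here is a minimal sketch of the checks described in the post. The class and method names are illustrative (not from the original Connect item), and the expected generations assume a 32-bit .NET 4.0 process; on 64-bit the heuristic does not apply, so double[1000] stays in generation 0.

using System;

class LohDemo
{
    // An object allocated directly on the LOH is reported as
    // generation 2 immediately after creation.
    static void Report(string label, object obj)
    {
        Console.WriteLine("{0}: generation {1}", label, GC.GetGeneration(obj));
    }

    static void Main()
    {
        // 84987 + 12 bytes of object overhead = 84999: just under the
        // 85,000-byte LOH threshold, so expect generation 0.
        Report("byte[84987]", new byte[84987]);

        // 85,000 bytes or more: allocated on the LOH, so expect generation 2.
        Report("byte[85000]", new byte[85000]);

        // 999 doubles (roughly 8 KB): small object heap, expect generation 0.
        Report("double[999]", new double[999]);

        // 1000 doubles: still only about 8 KB, but on 32-bit .NET 4.0 the
        // alignment heuristic places it on the LOH, so expect generation 2.
        Report("double[1000]", new double[1000]);

        // Array.CreateInstance bypasses the heuristic: the same
        // 1000-element double array is reported as generation 0.
        Report("Array.CreateInstance(double, 1000)",
               Array.CreateInstance(typeof(double), 1000));
    }
}

Running this and comparing the reported generations for double[999], double[1000], and the Array.CreateInstance array makes the heuristic, and its quirk, directly visible.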