Recently I reviewed the portion of my ATUM-compliant cost model that estimates fully burdened physical server costs.
Here's an excerpt of the relevant cost model objects:
At the IT Resource Towers (ITRT) object level, I'm using TBM Taxonomy v2.1 (details here).
My IT Resource Towers object is backed by a data table containing 41 rows which correspond to each tower and sub-tower combination listed in the taxonomy.
As expected, my Data Center tower cost allocates from ITRT to the Data Centers object.
Then it allocates from Data Centers to the Physical Server object, weighted by # CPU cores per server.
(So for instance, a server with 8 cores receives twice as much Data Centers cost as a server with 4 cores.)
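The core-weighted allocation described above can be sketched in a few lines. This is a minimal illustration of the arithmetic, not Apptio's implementation; the function and data names are hypothetical.

```python
def allocate_by_cores(total_cost, servers):
    """Split total_cost across servers in proportion to CPU core count.

    servers: dict mapping server name -> number of CPU cores.
    Returns a dict mapping server name -> allocated cost.
    """
    total_cores = sum(servers.values())
    return {name: total_cost * cores / total_cores
            for name, cores in servers.items()}

# An 8-core server receives twice the cost of a 4-core server:
shares = allocate_by_cores(1200.0, {"srv-a": 8, "srv-b": 4})
# shares == {"srv-a": 800.0, "srv-b": 400.0}
```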
Also as expected, my Compute tower cost allocates from ITRT to the Physical Server object, weighted by # CPU cores per server.
In my screenshot above, I have separate allocation lines for Unix and Windows, but I could combine these if I wanted to (by setting up an Operating System direct data reference between the two objects, to ensure cost does not get mixed between OSes).
But three issues weigh heavily (pun intended) on my mind:
1. Server depreciation cost allocates from Fixed Asset Ledger to ITRT to Physical Server, but since I weight solely by # CPU cores (with no allocation filters), some of this depreciation cost is probably being allocated to servers which are already fully depreciated, unfairly driving up their estimated cost.
2. Server depreciation cost (again, originating from Fixed Asset Ledger object) winds up allocating across multiple servers as weighted by # CPU cores, but the number of cores seems somewhat unrelated to the amount of depreciation each server should receive. I have many 8-core servers whose initial purchase price was lower than some of my 4-core servers, for example.
3. My data center power bill correctly rolls up through the model to ITRT to Data Centers to Physical Server object, and I understand that the majority of a server's power is used for its CPU(s). But different CPUs use different amounts of power, and besides, my server CPUs aren't 100% active all month long. Weighting data center power cost by # CPU cores per server therefore doesn't seem fully defensible.
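To make the three issues concrete, here is a sketch of alternative allocation weights they suggest: net book value for depreciation (issues 1 and 2), and estimated energy draw for power (issue 3). This is my own illustrative sketch, not the model's actual logic; all field names (`purchase_price`, `cpu_tdp_watts`, etc.) are hypothetical.

```python
def depreciation_weight(server):
    # Issues 1 & 2: weight by remaining net book value, so fully
    # depreciated servers (net book value 0) receive no depreciation
    # cost, and pricier servers receive proportionally more.
    return max(server["purchase_price"] - server["accumulated_depreciation"], 0)

def power_weight(server):
    # Issue 3: weight by a rough energy-draw estimate (CPU TDP in watts
    # times average utilization) instead of raw core count.
    return server["cpu_tdp_watts"] * server["avg_cpu_utilization"]

def allocate(total_cost, servers, weight_fn):
    """Split total_cost across servers in proportion to weight_fn(server)."""
    weights = {s["name"]: weight_fn(s) for s in servers}
    total = sum(weights.values())
    return {name: total_cost * w / total for name, w in weights.items()}

servers = [
    {"name": "srv-a", "purchase_price": 10000, "accumulated_depreciation": 10000,
     "cpu_tdp_watts": 95, "avg_cpu_utilization": 0.50},
    {"name": "srv-b", "purchase_price": 8000, "accumulated_depreciation": 4000,
     "cpu_tdp_watts": 120, "avg_cpu_utilization": 0.25},
]

# srv-a is fully depreciated, so it receives none of the depreciation cost:
dep = allocate(1000.0, servers, depreciation_weight)
# dep == {"srv-a": 0.0, "srv-b": 1000.0}
```

Either weight could replace the # CPU cores metric on the relevant allocation line; the open question for the model is whether data of this quality (net book value per asset, utilization per server) is actually available to feed it.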