Frequency scaling for simulator debugging

While working on a recent paper, I stumbled on an easy way to find errors in a computer architecture simulator. I simulated a processor at various frequencies and plotted the resulting performance. I expected to see a nice “smooth” set of points, perhaps something like this:

[Figure: Expected effect of frequency scaling]

Aside: in my paper, I explain why looking at execution time vs. cycle time is easier than looking at performance vs. frequency.
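To give a rough intuition for that aside (this is a back-of-the-envelope model of my own, not the paper's actual formulation): if a workload spends C core-bound cycles plus a roughly frequency-independent memory stall time T_mem, then execution time is approximately affine in cycle time, so its plot should be nearly a straight line, while performance vs. frequency is a hyperbola-like curve whose deviations are harder to eyeball.

```latex
% Back-of-the-envelope model (my simplification, not the paper's exact one):
% C core-bound cycles plus frequency-independent memory stall time T_mem.
\[
T_{\mathrm{exec}}(t_{\mathrm{cycle}}) \;\approx\; C \, t_{\mathrm{cycle}} + T_{\mathrm{mem}}
\qquad \text{vs.} \qquad
\mathrm{Perf}(f) \;\approx\; \frac{f}{C + f \, T_{\mathrm{mem}}}
\]
```

A kink or jump stands out immediately against a straight line, which is what makes the linear view useful for debugging.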

Instead, I saw a big jump in performance at a certain frequency point. The plot I got looked more like this:

[Figure: Simulated effect of frequency scaling]

Naturally, I investigated. It turns out that this particular workload (lbm) generates a lot of writeback memory requests. At high frequencies, these requests saturate the capacity of the buffers holding them, so sometimes a writeback is generated without an available buffer to hold it. The person who wrote the writeback code (who shall remain unnamed) assumed that this scenario would rarely occur and chose to simply drop the writeback. Well, the scenario occurs often when running lbm at high frequencies, so many writeback requests were silently lost. The lost traffic reduced off-chip memory contention, producing an unexpected performance improvement: the big jump in the plot.
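The bug followed a classic pattern. Here is a minimal C++ sketch of it; the names (WritebackBuffer, WritebackRequest, and the method names) are mine for illustration, not from the actual simulator. The buggy version silently discards the request when the buffer is full; one common fix is to report failure so the caller can stall and retry the next cycle.

```cpp
#include <cstddef>
#include <cstdint>
#include <deque>

// Hypothetical writeback request; field names are illustrative only.
struct WritebackRequest { uint64_t addr; };

class WritebackBuffer {
    std::deque<WritebackRequest> slots_;
    size_t capacity_;
public:
    explicit WritebackBuffer(size_t capacity) : capacity_(capacity) {}

    // Buggy version: silently drops the writeback when the buffer is full.
    // At high simulated frequencies lbm keeps the buffer saturated, so many
    // requests vanish, off-chip contention falls, and performance jumps.
    void enqueue_drop_on_full(const WritebackRequest& req) {
        if (slots_.size() == capacity_) return;  // request silently lost
        slots_.push_back(req);
    }

    // Fixed version: report failure so the caller can hold the dirty line
    // and retry next cycle instead of losing the request.
    bool try_enqueue(const WritebackRequest& req) {
        if (slots_.size() == capacity_) return false;  // caller stalls and retries
        slots_.push_back(req);
        return true;
    }
};
```

With the fix, a full buffer back-pressures the cache instead of destroying traffic, so the memory system sees the contention it would see in real hardware.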

Of course, I fixed this simulation inaccuracy, along with a few others. I also learned a new way to detect simulator errors.
