
dc.contributor.author: Potter, R.
dc.date.accessioned: 2010-12-15T18:53:02Z
dc.date.available: 2010-12-15T18:53:02Z
dc.date.issued: 1996
dc.identifier.citation: Potter, R. 1996, Investigating the limits of instruction level parallelism. UH Computer Science Technical Report, vol. 245, University of Hertfordshire.
dc.identifier.other: PURE: 100182
dc.identifier.other: PURE UUID: 8c2d4aae-946c-4c55-97e2-ec5c44338fc8
dc.identifier.other: dspace: 2299/5078
dc.identifier.uri: http://hdl.handle.net/2299/5078
dc.description.abstract: High performance computer architectures increasingly use compile-time instruction scheduling to reorder code and expose parallelism that can be exploited at run-time. Although respectable performance increases have been reported, a significant gap remains between the performance achieved and what has theoretically been shown to be possible. All scheduling algorithms used to reorder code introduce, either explicitly or implicitly, barriers to code motion, which in turn limit the performance realised. Trace-driven simulation is used to quantify the amount of instruction-level parallelism available in general-purpose code and the impact of various artificial barriers to code motion. This work is based on the Hatfield Superscalar Architecture, a progressive multiple-instruction-issue processor. The results of this study will be used to direct future developments in instruction scheduling technology.
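The abstract describes a classic trace-driven limit study: replay an instruction trace, schedule each instruction as early as its data dependences allow, and report achieved parallelism. A minimal sketch of that idea follows; it is not taken from the report itself, and the function name, trace encoding, and idealised assumptions (only true register dependences constrain issue, i.e. perfect renaming and unlimited resources) are illustrative.

```python
# Hypothetical sketch of a trace-driven ILP limit measurement.
# Each trace entry is (reads, writes): the register names an instruction
# reads and writes. An instruction issues one cycle after the latest
# producer of any input; only true (RAW) dependences constrain issue,
# modelling an idealised machine with perfect renaming and no resource
# limits. ILP is the trace length divided by the schedule depth.

def ilp_limit(trace):
    """trace: list of (reads, writes) tuples of register-name lists."""
    ready = {}   # register -> cycle at which its latest value is available
    depth = 0
    for reads, writes in trace:
        issue = max((ready.get(r, 0) for r in reads), default=0) + 1
        for w in writes:
            ready[w] = issue
        depth = max(depth, issue)
    return len(trace) / depth if depth else 0.0

# Toy trace: two independent two-instruction dependence chains.
trace = [
    (["a"], ["b"]),   # b = f(a)
    (["c"], ["d"]),   # d = f(c), independent of the first chain
    (["b"], ["e"]),   # e = f(b), depends on the first instruction
    (["d"], ["f"]),   # f = f(d), depends on the second instruction
]
print(ilp_limit(trace))  # 2.0: two instructions can issue each cycle
```

Adding anti- and output-dependences (or scheduling-window and branch constraints) to the issue-cycle computation is how the "artificial barriers to code motion" the abstract mentions would be modelled and their cost quantified.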
dc.language.iso: eng
dc.publisher: University of Hertfordshire
dc.relation.ispartofseries: UH Computer Science Technical Report
dc.rights: Open
dc.title: Investigating the limits of instruction level parallelism
dc.contributor.institution: School of Computer Science
dc.relation.school: School of Computer Science
dcterms.dateAccepted: 1996
rioxxterms.type: Other
herts.preservation.rarelyaccessed: true
herts.rights.accesstype: Open

