4. Update: we added double-double (DD) precision performance. In this case, we used a 2x2 block. On the Cypress architecture GPU, we take advantage of the FMA_64 instruction. For the MAD peak in DD, we assume one DD operation takes 20 DP operations (ops) without FMA and 15 ops with FMA. More precisely, DD add and DD mul without FMA each take ~ 20 ops, while DD mul with FMA takes only ~ 8 ops. Even without the FMA_64 instruction, we can use the MULADD instruction to reduce the op count in DD mul. On RV770, this gives 13% better performance, as indicated in the row labeled MAD.