Saketh Rama
Bio
Saketh Rama received an A.B. from Harvard, where he is now a Ph.D. student in Computer Science. His current research focuses on parallel architectures and algorithms for machine learning and artificial intelligence.
Publications
- Reagen, B., Whatmough, P., Adolf, R., Rama, S., Lee, H., Lee, S. K., Hernández-Lobato, J. M., Wei, G.-Y., and Brooks, D. Minerva: Enabling Low-Power, Highly-Accurate Deep Neural Network Accelerators. In ACM SIGARCH Computer Architecture News (2016), vol. 44, IEEE Press, pp. 267–278.
- Adolf, R., Rama, S., Reagen, B., Wei, G.-Y., and Brooks, D. Fathom: Reference Workloads for Modern Deep Learning Methods. In 2016 IEEE International Symposium on Workload Characterization (IISWC) (2016), IEEE, pp. 1–10.
- Whatmough, P. N., Lee, S. K., Lee, H., Rama, S., Brooks, D., and Wei, G.-Y. A 28nm SoC with a 1.2 GHz 568nJ/Prediction Sparse Deep-Neural-Network Engine with >0.1 Timing Error Rate Tolerance for IoT Applications. In 2017 IEEE International Solid-State Circuits Conference (ISSCC) (2017), IEEE, pp. 242–243.
- Adolf, R., Rama, S., Reagen, B., Wei, G.-Y., and Brooks, D. The Design and Evolution of Deep Learning Workloads. IEEE Micro, 1 (2017), 18–21.
© 2024 Saketh Rama