Article; Proceedings Paper

Sirius: An Open End-to-End Voice and Vision Personal Assistant and Its Implications for Future Warehouse Scale Computers

Journal

ACM SIGPLAN Notices
Volume 50, Issue 4, Pages 223-238

Publisher

Association for Computing Machinery
DOI: 10.1145/2775054.2694347

Keywords

datacenters; warehouse scale computers; emerging workloads; intelligent personal assistants

Funding

  1. Google
  2. ARM
  3. Defense Advanced Research Projects Agency (DARPA) [HR0011-13-2-000]
  4. National Science Foundation (NSF) [CCF-SHF-1302682, CNS-CSR-1321047]

Abstract

As user demand scales for intelligent personal assistants (IPAs) such as Apple's Siri, Google's Google Now, and Microsoft's Cortana, we are approaching the computational limits of current datacenter architectures. It is an open question how future server architectures should evolve to enable this emerging class of applications, and the lack of an open-source IPA workload is an obstacle in addressing this question. In this paper, we present the design of Sirius, an open end-to-end IPA web-service application that accepts queries in the form of voice and images, and responds with natural language. We then use this workload to investigate the implications of four points in the design space of future accelerator-based server architectures spanning traditional CPUs, GPUs, manycore throughput co-processors, and FPGAs. To investigate future server designs for Sirius, we decompose Sirius into a suite of 7 benchmarks (Sirius Suite) comprising the computationally intensive bottlenecks of Sirius. We port Sirius Suite to a spectrum of accelerator platforms and use the performance and power trade-offs across these platforms to perform a total cost of ownership (TCO) analysis of various server design points. In our study, we find that accelerators are critical for the future scalability of IPA services. Our results show that GPU- and FPGA-accelerated servers improve the query latency on average by 10x and 16x, respectively. For a given throughput, GPU- and FPGA-accelerated servers can reduce the TCO of datacenters by 2.6x and 1.4x, respectively.
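
To make the abstract's cost-per-throughput claim concrete, the Python sketch below works through a simplified TCO comparison between a baseline server and a hypothetical accelerated server at a fixed aggregate query rate. This is not the paper's TCO model; the electricity price, amortization period, server prices, power draws, and per-server query throughputs are all illustrative placeholders, not values from the paper.

import math

# Minimal sketch of a cost-per-throughput comparison in the spirit of the
# abstract's TCO analysis. Every figure below (electricity price, lifetime,
# server prices, power draws, per-server QPS) is an illustrative placeholder,
# not a value taken from the paper.

ELECTRICITY_USD_PER_KWH = 0.10          # assumed energy price
LIFETIME_HOURS = 3 * 365 * 24           # assumed 3-year amortization period

def tco_per_server(capex_usd, avg_power_w):
    """Capital cost plus lifetime energy cost for one server (simplified)."""
    energy_kwh = (avg_power_w / 1000.0) * LIFETIME_HOURS
    return capex_usd + energy_kwh * ELECTRICITY_USD_PER_KWH

def datacenter_tco(target_qps, qps_per_server, capex_usd, avg_power_w):
    """Servers needed to sustain target_qps, times the per-server TCO."""
    servers = math.ceil(target_qps / qps_per_server)
    return servers * tco_per_server(capex_usd, avg_power_w)

# Hypothetical design points: a CPU-only baseline and an accelerated server
# that serves queries faster (higher QPS) but costs more and draws more power.
target_qps = 10_000
baseline = datacenter_tco(target_qps, qps_per_server=50,
                          capex_usd=4_000, avg_power_w=300)
accelerated = datacenter_tco(target_qps, qps_per_server=400,
                             capex_usd=7_000, avg_power_w=500)

print(f"baseline TCO:     ${baseline:,.0f}")
print(f"accelerated TCO:  ${accelerated:,.0f}")
print(f"TCO reduction:    {baseline / accelerated:.1f}x")

Under these placeholder numbers, the accelerated design needs far fewer servers to meet the same throughput, so its higher per-server cost and power are outweighed at the datacenter level; this is the trade-off the paper quantifies with measured performance and power data.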
