
Uzi Vishkin (born 1953) is a computer scientist at the University of Maryland, College Park, where he is Professor of Electrical and Computer Engineering at the University of Maryland Institute for Advanced Computer Studies (UMIACS). He is known for his work in the field of parallel computing. In 1996, he was inducted as a Fellow of the Association for Computing Machinery, with the following citation: "One of the pioneers of parallel algorithms research, Dr. Vishkin's seminal contributions played a leading role in forming and shaping what thinking in parallel has come to mean in the fundamental theory of Computer Science."[1]

Uzi Vishkin
Born: 1953
Alma mater: Hebrew University; Technion
Scientific career
Fields: parallel algorithms
Institutions: IBM Thomas J. Watson Research Center; New York University; Tel Aviv University; University of Maryland, College Park
Doctoral advisor: Yossi Shiloach

Biography


Uzi Vishkin was born in Tel Aviv, Israel. He completed his B.Sc. (1974) and M.Sc. in mathematics at the Hebrew University before earning his D.Sc. in computer science at the Technion (1981). He then spent a year at the IBM Thomas J. Watson Research Center in Yorktown Heights, New York. From 1982 to 1984 he worked in the department of computer science at New York University, remaining affiliated with it until 1988. From 1984 until 1997 he worked in the computer science department of Tel Aviv University, serving as its chair from 1987 to 1988. Since 1988 he has been with the University of Maryland, College Park.

PRAM-on-chip


A notable rudimentary abstraction, namely that any single instruction available for execution in a serial program executes immediately, made serial computing simple. A consequence of this abstraction is a step-by-step (inductive) explication of the instruction available next for execution. The rudimentary parallel abstraction behind the PRAM-on-chip concept, dubbed Immediate Concurrent Execution (ICE) in Vishkin (2011), is that indefinitely many instructions available for concurrent execution execute immediately. A consequence of ICE is a step-by-step (inductive) explication (also known as lock-step) of the instructions available next for concurrent execution. Moving beyond the serial von Neumann computer (the only successful general-purpose platform to date), the aspiration of the PRAM-on-chip concept is that computer science will again be able to augment mathematical induction with a simple one-line computing abstraction.

A chronological overview of the evolution of the PRAM-on-chip concept and of its hardware and software prototyping follows. In the 1980s and 1990s, Uzi Vishkin co-authored several articles that helped build a theory of parallel algorithms in a mathematical model called the parallel random-access machine (PRAM), a generalization for parallel computing of the standard serial random-access machine (RAM) model. The parallel machines needed to implement the PRAM model had not yet been built at the time, and quite a few researchers doubted that such machines could ever be built. Concluding in 1997[2] that the transistor count on a chip implied by Moore's Law would allow building a powerful parallel computer on a single silicon chip within a decade, he developed a PRAM-on-chip vision that called for building a parallel computer on a single chip that lets programmers develop their algorithms for the PRAM model. He went on to invent the explicit multi-threaded (XMT) computer architecture, which enables implementation of this PRAM theory, and led his research team to complete, in January 2007, a 64-processor computer[3] named Paraleap[4] that demonstrates the overall concept.

The XMT concept was presented in Vishkin et al. (1998) and Naishlos et al. (2003), the XMT 64-processor computer in Wen & Vishkin (2008) and Vishkin (2011), and most recently in Ghanim, Vishkin & Barua (2018), which showed that lock-step parallel programming (using ICE) can achieve the same performance as the fastest hand-tuned multi-threaded code on XMT systems. Such an inductive lock-step approach stands in contrast to the multi-threaded programming approaches of other many-core systems, which are known for challenging programmers. The demonstration of XMT comprised several hardware and software components, as well as teaching PRAM algorithms in order to program the XMT Paraleap, using a language called XMTC. Since making parallel programming easy is one of the biggest challenges facing computer science today, the demonstration also sought to include teaching the basics of PRAM algorithms and XMTC programming to students ranging from high school to graduate school.
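To make the ICE abstraction concrete, the following minimal sketch simulates a lock-step PRAM-style computation (an inclusive prefix sum by recursive doubling) in plain serial C. It is an illustration of the abstraction only, not XMTC code: each round is written as if all virtual threads execute at once, and the simulation preserves the lock-step semantics by computing every result of a round from the values that were current when the round began.

    #include <stdio.h>
    #include <string.h>

    #define N 8

    /* Serial simulation of a lock-step (ICE-style) parallel computation:
     * an inclusive prefix sum by recursive doubling.  "For all i in
     * parallel" is modeled by computing every result of a round from the
     * old array (cur) before publishing any of them (next), mimicking the
     * synchronous, lock-step semantics of the PRAM model. */
    int main(void) {
        int cur[N], next[N];
        for (int i = 0; i < N; i++) cur[i] = 1;    /* prefix sums of all ones */

        for (int d = 1; d < N; d *= 2) {           /* O(log N) rounds */
            for (int i = 0; i < N; i++)            /* conceptually: all i at once */
                next[i] = (i >= d) ? cur[i] + cur[i - d] : cur[i];
            memcpy(cur, next, sizeof cur);         /* end-of-round synchronization */
        }

        for (int i = 0; i < N; i++) printf("%d ", cur[i]);   /* 1 2 3 ... 8 */
        printf("\n");
        return 0;
    }

In XMTC, such a parallel step would be expressed with the language's spawn-join constructs (compare the spawn-join instruction set architecture of note 2) rather than a serial loop, with the XMT hardware and compiler handling the scheduling of the virtual threads.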

Parallel algorithms


In the field of parallel algorithms, Uzi Vishkin co-authored the paper Shiloach & Vishkin (1982b), which contributed the work-time (WT) (sometimes called work-depth) framework for conceptualizing and describing parallel algorithms. The WT framework was adopted as the basic presentation framework in the parallel algorithms books JaJa (1992) and Keller, Kessler & Traeff (2001), as well as in the class notes Vishkin (2009). In the WT framework, a parallel algorithm is first described in terms of parallel rounds. For each round, the operations to be performed are characterized, but several issues can be suppressed: the number of operations at each round need not be clear, processors need not be mentioned, and any information that may help with the assignment of processors to jobs need not be accounted for. Second, the suppressed information is provided. The inclusion of the suppressed information is guided by the proof of a scheduling theorem due to Brent (1974). The WT framework is useful because it can greatly simplify the initial description of a parallel algorithm, and inserting the details suppressed by that initial description is often not very difficult. Similarly, first casting an algorithm in the WT framework can be very helpful for programming it in XMTC. Vishkin (2011) explains the simple connection between the WT framework and the more rudimentary ICE abstraction noted above.
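As an illustration of the WT framework, consider summing n numbers with a balanced binary tree. The algorithm is first described round by round: round k performs n/2^k concurrent additions, for a depth of O(log n) rounds and a total work of n - 1 = O(n) operations; Brent's scheduling theorem then guides filling in the suppressed processor assignment so that p processors run it in O(n/p + log n) time. The minimal C sketch below simulates this round-by-round description serially; it is an illustration of the framework, not XMTC or PRAM code.

    #include <stdio.h>

    #define N 16   /* assume N is a power of two for simplicity */

    /* WT-style description of parallel summation, simulated serially.
     * Each iteration of the outer loop is one round; all additions of a
     * round are conceptually concurrent.  Depth = log2 N rounds, work =
     * N/2 + N/4 + ... + 1 = N - 1 operations; by Brent's theorem, p
     * processors can schedule this in O(N/p + log N) time. */
    int main(void) {
        int a[N];
        for (int i = 0; i < N; i++) a[i] = i + 1;  /* 1 + 2 + ... + 16 = 136 */

        int work = 0, depth = 0;
        for (int len = N; len > 1; len /= 2) {     /* one iteration = one round */
            for (int i = 0; i < len / 2; i++) {    /* conceptually: all i at once */
                a[i] = a[2 * i] + a[2 * i + 1];
                work++;
            }
            depth++;
        }
        printf("sum = %d, work = %d, depth = %d\n", a[0], work, depth);
        return 0;
    }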

In the field of parallel and distributed algorithms, one of the seminal papers co-authored by Uzi Vishkin is Cole & Vishkin (1986). This work introduced an efficient parallel technique for graph coloring. The Cole–Vishkin algorithm finds a vertex coloring in an n-cycle in O(log* n) synchronous communication rounds. This algorithm is nowadays presented in many textbooks, including Introduction to Algorithms by Cormen et al.,[5] and it forms the basis of many other distributed algorithms for graph coloring.[6]
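The color-reduction step at the heart of this technique can be sketched briefly. Starting from the vertex identifiers as an initial proper coloring of a directed cycle, every vertex compares its color with its successor's color, finds the lowest bit position i in which the two differ, and takes 2i plus its own bit at position i as its new color; each synchronous round keeps the coloring proper while roughly halving the number of bits per color down to a logarithm, so O(log* n) rounds suffice to reach a constant number of colors. The C sketch below is an illustrative serial simulation of these rounds, not the authors' original code; the cycle length and the number of rounds are chosen only for the example.

    #include <stdio.h>

    #define N 12   /* directed cycle v -> (v+1) mod N, colored by vertex id */

    /* One Cole-Vishkin color-reduction round: every vertex looks at its
     * successor's color, finds the lowest bit position i where the two
     * colors differ, and recolors itself 2*i + (its own bit at i).  All
     * new colors are computed from the old ones, as in one synchronous
     * round; the coloring stays proper after every round. */
    static void reduce_round(const int old[N], int fresh[N]) {
        for (int v = 0; v < N; v++) {              /* conceptually: all v at once */
            int succ = (v + 1) % N;
            int diff = old[v] ^ old[succ];         /* nonzero for a proper coloring */
            int i = 0;
            while (((diff >> i) & 1) == 0) i++;    /* lowest differing bit */
            fresh[v] = 2 * i + ((old[v] >> i) & 1);
        }
    }

    int main(void) {
        int c[N], tmp[N];
        for (int v = 0; v < N; v++) c[v] = v;      /* unique ids = proper coloring */

        for (int round = 0; round < 3; round++) {  /* a few rounds suffice here */
            reduce_round(c, tmp);
            for (int v = 0; v < N; v++) c[v] = tmp[v];
        }
        for (int v = 0; v < N; v++)                /* check the coloring is proper */
            if (c[v] == c[(v + 1) % N]) printf("conflict at %d\n", v);
        for (int v = 0; v < N; v++) printf("%d ", c[v]);   /* colors are in 0..5 */
        printf("\n");
        return 0;
    }

With this basic reduction the number of colors stops decreasing at six; on a cycle, a few additional constant-time steps are typically used afterwards to bring it down to three.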

Other contributions by Uzi Vishkin and various co-authors include parallel algorithms for list ranking, lowest common ancestor, spanning trees, and biconnected components.

Selected publications

  • Shiloach, Yossi; Vishkin, Uzi (1982a), "An O(log n) parallel connectivity algorithm", Journal of Algorithms, 3: 57–67, doi:10.1016/0196-6774(82)90008-6.
  • Shiloach, Yossi; Vishkin, Uzi (1982b), "An O(n² log n) parallel max-flow algorithm", Journal of Algorithms, 3 (2): 128–146, doi:10.1016/0196-6774(82)90013-X.
  • Mehlhorn, Kurt; Vishkin, Uzi (1984), "Randomized and deterministic simulations of PRAMs by parallel machines with restricted granularity of parallel memories", Acta Informatica, 21 (4): 339–374, doi:10.1007/BF00264615, S2CID 29789494.
  • Tarjan, Robert; Vishkin, Uzi (1985), "An efficient parallel biconnectivity algorithm", SIAM Journal on Computing, 14 (4): 862–874, CiteSeerX 10.1.1.465.8898, doi:10.1137/0214061, S2CID 7231609.
  • Vishkin, Uzi (1985), "Optimal parallel pattern matching in strings", Information and Control, 67 (1–3): 91–113, doi:10.1016/S0019-9958(85)80028-0.
  • Cole, Richard; Vishkin, Uzi (1986), "Deterministic coin tossing with applications to optimal parallel list ranking", Information and Control, 70 (1): 32–53, doi:10.1016/S0019-9958(86)80023-7.
  • Vishkin, Uzi; Dascal, Shlomit; Berkovich, Efraim; Nuzman, Joseph (1998), "Explicit Multi-Threading (XMT) bridging models for instruction parallelism", Proc. 1998 ACM Symposium on Parallel Algorithms and Architectures (SPAA), pp. 140–151.
  • Naishlos, Dorit; Nuzman, Joseph; Tseng, Chau-Wen; Vishkin, Uzi (2003), "Towards a First Vertical Prototyping of an Extremely Fine-Grained Parallel Programming Approach" (PDF), Theory of Computing Systems, 36 (5): 521–552, doi:10.1007/s00224-003-1086-6, S2CID 1929495.
  • Wen, Xingzhi; Vishkin, Uzi (2008), "FPGA-based prototype of a PRAM-on-chip processor", Proc. 2008 ACM Conference on Computing Frontiers (Ischia, Italy) (PDF), pp. 55–66, doi:10.1145/1366230.1366240, ISBN 978-1-60558-077-7, S2CID 11557669.
  • Vishkin, Uzi (January 2011), "Using simple abstraction to reinvent computing for parallelism", Communications of the ACM, 54 (1): 75–85, doi:10.1145/1866739.1866757, S2CID 10279904.
  • Ghanim, Fady; Vishkin, Uzi; Barua, Rajeev (February 2018), "Easy PRAM-Based High-Performance Parallel Programming with ICE", IEEE Transactions on Parallel and Distributed Systems, 29 (2): 377–390, doi:10.1109/TPDS.2017.2754376, hdl:1903/18521.

Notes

  1. ^ ACM: Fellows Award / Uzi Vishkin.
  2. ^ Vishkin, Uzi. Spawn-join instruction set architecture for providing explicit multithreading. U.S. Patent 6,463,527. See also Vishkin et al. (1998).
  3. ^ University of Maryland, press release, June 26, 2007: "Maryland Professor Creates Desktop Supercomputer" Archived 2009-12-14 at the Wayback Machine.
  4. ^ University of Maryland, A. James Clark School of Engineering, press release, November 28, 2007: "Next Big "Leap" in Computing Technology Gets a Name".
  5. ^ 1st ed., Section 30.5.
  6. ^ See, e.g., Goldberg, Plotkin & Shannon (1988).
