Developing AlphaGo of Healthcare
Edward Y. Chang
President, HTC Health Care, Taiwan
ABSTRACT DeepMind’s AlphaGo employs deep learning and reinforcement learning to beat one of the best human Go players. At HTC, my team has been working since 2012 on improving healthcare quality via artificial intelligence. In this talk, I will present three healthcare-related initiatives: Tricorder, Healthbox, and VR Health, and explain how artificial intelligence techniques play pivotal roles in improving diagnosis accuracy and treatment effectiveness. Specifically, this talk presents scalable algorithms for deep learning, reinforcement learning, and transfer learning to tackle both the big and small characteristics of healthcare data. The Healthbox and VR initiatives were recently awarded several top prizes at 2016 CES and MWC, and Tricorder advanced to the final round of the Qualcomm Tricorder XPRIZE competition. My team has also launched Medication Management and Symptom Checker via app stores, both powered by crowdsourcing and artificial intelligence.
BIOSKETCH Dr. Edward Chang is the President of HTC Health Care. At HTC, he helps develop IoT, cloud computing, and big data platforms to power several novel applications. His most notable project is the Tricorder project, in which he co-leads (with Prof. CK Peng at Harvard) a team of physicians, scientists, and engineers to design and develop mobile wireless diagnostic instruments that can help consumers make their own reliable health diagnoses anywhere, at any time. The project entered the Qualcomm Tricorder XPRIZE competition in 2013 alongside 254 other entrants and was selected in August 2014 as one of the ten finalists advancing to the final round (grand prizes to be announced in January 2016). Prior to his HTC post, Ed was a director of Google Research for 6.5 years, leading research and development in several areas including big data mining, indoor localization, Web search (spam fighting), and social networking and search integration. His contributions in parallel machine learning algorithms and big data mining have been recognized through several keynote invitations (see the Stanford MMDS/ACM CIKM/ACM CIVR/ACM MM/AAIM/ADMA keynote deck and tutorial deck for details), and the open-source codes his team developed (PSVM, PLDA+, Parallel Spectral Clustering, and Parallel Frequent Pattern Mining) have been collectively downloaded over 12,000 times. His work on indoor localization with project X was deployed via Google Maps (see the XINX paper and the editor summary of his ASIST/ACM SIGIR/ICADL keynotes). Ed’s team also developed the Google Q&A system (codename Confucius), which was launched in 60+ countries including China, Russia, Thailand, Vietnam, and Indonesia, as well as 17 Arab and 40 African nations. Ed’s book, Foundations of Large-Scale Multimedia Information Management and Retrieval, provides a good summary of his experience in applying big data techniques to feature extraction, learning, and indexing for organizing multimedia data to support both management and retrieval.
Prior to Google, Ed was a full professor of Electrical Engineering at the University of California, Santa Barbara (UCSB). He joined UCSB in 1999 after receiving his PhD from Stanford University, was tenured in 2003, and was promoted to full professor in 2006. Ed has served on ACM (SIGMOD, KDD, MM, CIKM), VLDB, IEEE, WWW, and SIAM conference program committees, and co-chaired several conferences including MMM, ACM MM, ICDE, and WWW. He is a recipient of the NSF CAREER Award, the IBM Faculty Partnership Award, and the Google Innovation Award.
Human-Powered Multimedia Collection and Labelling, Challenges and Opportunities
Lei Chen
Hong Kong University of Science and Technology, China
ABSTRACT Crowdsourcing is a new computing paradigm in which humans are actively enlisted to participate in the computing process, especially for tasks that are intrinsically easier for humans than for computers. Not surprisingly, with the development of the mobile Internet, the magic power of crowdsourcing is now expanding to the physical world, where each user is treated as a mobile computing unit that can be activated and guided for certain tasks. This practice has become quite popular in many real multimedia applications, such as data collection and labeling. In this talk, I will first briefly review the history of crowdsourcing and discuss the key challenges related to human-powered multimedia collection and labeling. Then, I will demonstrate several possible solutions to address these challenges, including incentive design, task assignment, and quality control. Finally, I will highlight some future opportunities in this area.
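As an aside for readers unfamiliar with quality control in crowdsourcing, a common baseline is to assign the same task to several workers and aggregate their answers by majority vote. The following sketch is my own illustration (the item and label names are made up), not an algorithm from the talk:

```python
# Minimal quality-control sketch: aggregate redundant crowd labels
# for each item by majority vote.

from collections import Counter

def majority_vote(labels_per_item):
    """Map each item to the most frequent label among its workers."""
    return {item: Counter(labels).most_common(1)[0][0]
            for item, labels in labels_per_item.items()}

# Hypothetical redundant labels collected from three workers per image.
crowd = {
    "img1": ["cat", "cat", "dog"],
    "img2": ["dog", "dog", "dog"],
}
print(majority_vote(crowd))
```

More sophisticated schemes weight each worker by an estimated reliability, but the redundancy-plus-aggregation pattern is the same.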
BIOSKETCH Dr. Lei Chen received the BS degree in computer science and engineering from Tianjin University, Tianjin, China, in 1994, the MA degree from the Asian Institute of Technology, Bangkok, Thailand, in 1997, and the PhD degree in computer science from the University of Waterloo, Canada, in 2005. He is currently an associate professor in the Department of Computer Science and Engineering, Hong Kong University of Science and Technology. His research interests include crowdsourcing over social media, social media analysis, probabilistic and uncertain databases, and privacy-preserving data publishing. The system developed by his team won the excellent demonstration award at VLDB 2014, and he received the SIGMOD Test-of-Time Award in 2015. He has served as PC track chair for SIGMOD 2014, VLDB 2014, ICDE 2012, CIKM 2012, and SIGMM 2011, and as a PC member for SIGMOD, VLDB, ICDE, SIGMM, and WWW. Currently, he serves as Editor-in-Chief of the VLDB Journal and as an associate editor-in-chief of IEEE Transactions on Knowledge and Data Engineering. He is a member of the VLDB Endowment.
On Application-Aware Information Extraction for Big Data in Social Networks
Ming-Syan Chen
ACM Fellow, IEEE Fellow
National Taiwan University, Taiwan
ABSTRACT Due to the paradigm shift to cloud computing, data has been accumulating at a fast pace in various applications. Among others, the number of social network activities is increasing drastically. It has become very desirable to conduct various analyses for applications on social networks. However, as the scale of a social network has become prohibitively large, it is infeasible to scrutinize the data and extract the key essence from the entire social network. This issue is further complicated by the heterogeneous nature of the data. As a result, a significant amount of research effort has been devoted to extracting the essential application-dependent information from a social network. In this talk, we shall examine some recent studies on data processing and information extraction for social networks. Explicitly, we shall explore methods for three levels of information extraction in a social network, namely parameter extraction, information extraction, and structure extraction, and interpret them in light of their respective objectives. We then comment on how to conduct application-aware information extraction for big data in social networks.
BIOSKETCH Dr. Ming-Syan Chen received the Ph.D. degree in Computer, Information and Control Engineering from the University of Michigan, Ann Arbor, MI, USA. He is now the Dean of the College of Electrical Engineering and Computer Science and a Distinguished Professor in the EE Department at National Taiwan University. He was a research staff member at the IBM Thomas J. Watson Research Center, NY, USA, the President/CEO of the Institute for Information Industry (III), and the Director of the Research Center for Information Technology Innovation (CITI) at Academia Sinica. His research interests include databases, data mining, social networks, and IoT applications. He is a recipient of the National Chair Professorship, the Academic Award of the Ministry of Education, the NSC (National Science Council) Distinguished Research Award, the Y.Z. Hsu Science Chair Professor Award, the Pan Wen Yuan Distinguished Research Award, the Teco Award, the Honorary Medal of Information, and the K.-T. Li Research Breakthrough Award for his research work, as well as the Outstanding Innovation Award from IBM Corporate for his contribution to a major database product. Dr. Chen is a Fellow of ACM and a Fellow of IEEE.
Face Recognition Research: Beyond the Limit of Accuracy
Hitoshi Imaoka
NEC Corporation, Japan
ABSTRACT In recent years, expectations for biometric authentication have been heightened by increases in crime and the threat of terrorism. Face recognition has the advantage that the system can recognize a person at a distance even if the person is not aware of it. The technology is widely used, for example, in civil ID authentication, surveillance applications, and criminal ID searches. Though its accuracy has improved considerably over the past 20 years, accuracy remains the most important issue, particularly with respect to changes in individual faces and in the environments in which faces are captured. In this presentation, I will focus on the topic of face recognition accuracy. Images utilized in face recognition systems are mainly categorized as controlled or uncontrolled. Controlled images are taken under a controlled situation, like passport and driver’s license photos and mugshot images. Their accuracy is comparatively better than that of uncontrolled images. However, it remains a problem that the false rejection rate increases rapidly at low false acceptance rates, compared with fingerprint and iris recognition. On the other hand, common examples of uncontrolled images are found in the LFW database, which is gathered from images uploaded to the web. The accuracy here is not as high, but it has improved remarkably over the past few years through algorithms based on deep learning techniques. In addition, I will talk about problems in real applications induced by face recognition accuracy.
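The false-acceptance/false-rejection tradeoff mentioned above comes from sweeping a single match threshold over similarity scores: tightening the threshold lowers false acceptances but raises false rejections. The sketch below is a hypothetical illustration with made-up scores, not NEC's evaluation code:

```python
# Illustrative FAR/FRR computation at a given match threshold.
# FAR = fraction of impostor (different-person) scores accepted;
# FRR = fraction of genuine (same-person) scores rejected.

def far_frr(genuine, impostor, threshold):
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr

genuine  = [0.9, 0.8, 0.75, 0.6, 0.55]   # made-up same-person scores
impostor = [0.5, 0.4, 0.35, 0.3, 0.1]    # made-up different-person scores

# Raising the threshold trades false acceptances for false rejections.
for t in (0.3, 0.5, 0.7):
    far, frr = far_frr(genuine, impostor, t)
    print(f"threshold={t}: FAR={far:.2f}, FRR={frr:.2f}")
```

The abstract's observation is that for faces, FRR climbs steeply as the operating threshold is pushed toward very low FAR, more so than for fingerprint or iris systems.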
BIOSKETCH Dr. Hitoshi Imaoka is a research fellow in the Information and Media Processing Laboratories, NEC Corporation. He received his Master’s and PhD degrees in applied physics from Osaka University. He started in the R&D division at NEC in 1997, began working on face recognition algorithms in 2002, and currently holds a position as leader of the face recognition R&D team. The algorithm he developed achieved the highest accuracy in the still-face track of the Multiple-Biometric Evaluation 2010 carried out by the National Institute of Standards and Technology (NIST). His research interests are biometrics and pattern recognition.
Big Data and Smart Industry
Grace Lin
Vice President and Director General, Data Analytics Technology & Applications Research Institute
Institute for Information Industry (III), Taiwan
ABSTRACT With the rapid adoption of new technologies such as mobile computing, cloud computing, social networks, and the Internet of Things, data is being generated at a very rapid rate. These aspects of Big Data, with their characteristic velocity, variety, and variability, can be integrated and processed either in near real time or in batches to support decisions at different levels of business. As leading global governmental organizations and private enterprises have begun to apply Big Data Analytics for various public and commercial purposes, they are seeing valuable results in areas such as increased consumer insight, optimized business operations, and newly enabled innovative services.
In this talk, we will discuss emerging trends, opportunities, and challenges in Big Data Analytics, along with some examples of success in Tourism, Fintech, Agriculture, and Healthcare. We will then discuss how to seize the opportunities that Big Data Analytics presents, so that industries can once again successfully reinvent themselves, become a high-value software- and service-based economy, and reclaim their leadership in the ICT industry.
BIOSKETCH Dr. Grace Lin is the VP and Director General of the Data Analytics Technology & Applications Research Institute of III, a government think tank in information technology in Taiwan. Dr. Lin plays a significant role in strategic government plans and has initiated key industry R&D programs such as the Smart Living Strategy Plan, Smart Healthcare, Smart Tourism, and Big Data Analytics. Previously, she worked for IBM for more than 16 years, where she was a Distinguished Engineer, an Academy of Technology member, a Global Sense-and-Respond Leader, and CTO and Director for Innovation and Emerging Solutions at IBM Global Business Services. She served as a Researcher, Manager, and Senior Manager at the IBM T.J. Watson Research Center, and as a Relationship Manager for IBM Integrated Supply Chain. She has also served as an Adjunct Full Professor in the Department of IEOR at Columbia University, and is an INFORMS Fellow and VP of International Activities.
Referred to by Forrester as one of the six “Supply Chain Gurus”, Dr. Lin has co-authored more than 80 technical articles and 8 patents. She has chaired the INFORMS Fellow Selection Committee, and served on university advisory boards, National Science Foundation panels in the US, Canada, and Ireland, and editorial boards including MSOM, Operations Research, Interfaces, and Service Science. She has received INFORMS’ Franz Edelman Award, IBM’s Outstanding Technical Achievement Award, the IIE Doctoral Dissertation Award, and Purdue’s Outstanding Industrial Engineer Award.
Disaster Resilience through Big Open Data and Smart Things
Jane W.S. Liu
IEEE Life Fellow
Academia Sinica, Taiwan
ABSTRACT Nowadays, in developed regions, sensors and surveillance systems provide observational data about literally everything, everywhere. Governments, businesses, and other non-government entities own data and information needed to support their decisions and operations. Digital representations of buildings and their interior layouts are used in the management of an increasing number of buildings, and so on. Also, a growing number of city and township governments have made information on local public shelters and medical care facilities open and available online. Break-the-glass mechanisms for overriding normal data access control policies have enabled a tradeoff between availability of data and rigor of privacy protection during emergencies. It is safe to say that, together, existing information sources can provide risk reduction data for disasters of all types and severities. Here, the term disaster risk reduction data refers to data and information that are critical for effective preparedness and response against disasters. Examples include data on inventories of life-saving supplies and on structural and functional characteristics of affected buildings. Experiences with past disasters tell us that such data can help save lives and reduce property damage if they are available in time. While such data may not be big in volume, they are varied, dynamic, and uncertain, and thus match three of the four V’s (Volume, Variety, Velocity, and Veracity) in the definition of big data more closely than many other types of big data. This talk will present an overview of important types of disaster risk reduction data. It then describes examples of systems and applications that exploit big open disaster risk reduction data and smart things to help us and our living environments be better prepared and able to respond when disasters strike. Many technological challenges stand in the way of pervasive deployment of such systems and applications. The talk will present solutions to overcome some of the critical ones.
BIOSKETCH Dr. Jane W. S. Liu is a Distinguished Visiting Research Fellow of the Institute of Information Science and the Research Center for Information Technology Innovation of Academia Sinica, Taiwan. Before joining Academia Sinica in 2004, she was a software architect at Microsoft Corporation from 2000 to 2004 and a faculty member of the Computer Science Department at the University of Illinois at Urbana-Champaign from 1972 to 2000.
Her research interests have been in the areas of real-time and embedded systems. Since she joined Academia Sinica, a focus of her work has been on models, architecture, middleware, and tools for building high-quality and affordable human-centric automation and assistive devices and services. Some of them are designed to enhance the quality of life and self-reliance of their users, including elderly individuals. Others are automation tools that help care-providing institutions improve quality of care. A major thrust of her recent research is information and communication technology for disaster preparedness and response. Her work aims to strengthen the underpinnings of several critical technologies. An example is the technology for building active emergency response systems. Such a system uses pervasive smart embedded devices and mobile applications to process standards-compliant disaster alert messages from authorized senders and responds by taking appropriate actions to prevent loss of life, reduce the chance of injuries, and minimize property damage and economic losses when the forewarned disaster strikes. Another example is the foundation of tools needed to support crowdsourced collection of human sensor data and the processing of that data synergistically with data from in-situ physical sensors for disaster surveillance and early warning.
Jane Liu received her doctorate in Electrical Engineering from the Massachusetts Institute of Technology. She was the editor-in-chief of IEEE Transactions on Computers from 1996 to 1999 and has served on the program committees of numerous international conferences. She received the Outstanding Technical Achievement and Leadership Award from the IEEE Computer Society Technical Committee on Real-Time Systems in 2005, the Information Science Honorary Medal from the Taiwan Institute of Information and Computing Machinery in 2008, the Linux Golden Penguin Award for special contributions from the Taiwan Linux Consortium in 2009, and the Distinguished Educator Award from the Computer Science Department, University of Illinois at Urbana-Champaign in 2011. She is a life fellow of IEEE.
Sorting in Space
Hanan Samet
ACM Fellow, IEEE Fellow, IAPR Fellow
University of Maryland at College Park, USA
ABSTRACT The representation of spatial data is an important issue in computer graphics, computer vision, geographic information systems, and robotics. A wide variety of representations is currently in use. Recently, there has been much interest in hierarchical data structures such as quadtrees, octrees, and R-trees. The key advantage of these representations is that they provide a way to index into space; in fact, they are little more than multidimensional sorts. They are compact and, depending on the nature of the spatial data, they save space as well as time and also facilitate operations such as search. In this talk we give a brief overview of hierarchical spatial data structures and related research results. In addition, we demonstrate the SAND Browser and the VASCO Java applet, which illustrate these methods.
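To make the "multidimensional sort" intuition concrete, here is a minimal point-quadtree sketch (my own illustration, not code from the SAND or VASCO systems): each node partitions the plane into four quadrants around its point, so insertion descends by comparing both coordinates at once, the way a binary search tree sorts by one key.

```python
# Minimal point quadtree: each node splits 2-D space into four
# quadrants (NE/NW/SE/SW) relative to its own point.

class QuadNode:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.children = {}                 # quadrant label -> QuadNode

    def _quadrant(self, x, y):
        # Label the quadrant of (x, y) relative to this node's point.
        return ("N" if y >= self.y else "S") + ("E" if x >= self.x else "W")

    def insert(self, x, y):
        q = self._quadrant(x, y)
        if q in self.children:
            self.children[q].insert(x, y)  # descend, as in a BST
        else:
            self.children[q] = QuadNode(x, y)

    def points(self):
        # Traverse the tree, yielding every stored point.
        yield (self.x, self.y)
        for child in self.children.values():
            yield from child.points()

root = QuadNode(50, 50)
for px, py in [(25, 75), (75, 25), (10, 10), (60, 80)]:
    root.insert(px, py)
print(sorted(root.points()))
```

A range search over such a tree can discard an entire quadrant whose region misses the query window, which is where the space- and time-saving behavior described above comes from.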
BIOSKETCH Dr. Hanan Samet is a Distinguished University Professor of Computer Science at the University of Maryland, College Park, and is a member of the Institute for Advanced Computer Studies. He is also a member of the Computer Vision Laboratory at the Center for Automation Research, where he leads a number of research projects on the use of hierarchical data structures for database applications, geographic information systems, computer graphics, computer vision, image processing, games, robotics, and search. He received the B.S. degree in engineering from UCLA, and the M.S. degree in operations research and the M.S. and Ph.D. degrees in computer science from Stanford University. His doctoral dissertation dealt with proving the correctness of translations of LISP programs, which was the first work in translation validation and the related concept of proof-carrying code. He is the author of the recent book “Foundations of Multidimensional and Metric Data Structures,” published by Morgan Kaufmann, an imprint of Elsevier, in 2006, an award winner in the 2006 best book in Computer and Information Science competition of the Professional and Scholarly Publishers (PSP) Group of the Association of American Publishers (AAP), and of the first two books on spatial data structures, “Design and Analysis of Spatial Data Structures” and “Applications of Spatial Data Structures: Computer Graphics, Image Processing, and GIS,” both published by Addison-Wesley in 1990.
He is the Founding Editor-in-Chief of the ACM Transactions on Spatial Algorithms and Systems (TSAS), the founding chair of ACM SIGSPATIAL, and a recipient of a Science Foundation of Ireland (SFI) Walton Visitor Award at the Centre for Geocomputation at the National University of Ireland at Maynooth (NUIM), the 2009 UCGIS Research Award, the 2010 CMPS Board of Visitors Award at the University of Maryland, the 2011 ACM Paris Kanellakis Theory and Practice Award, and the 2014 IEEE Computer Society Wallace McDowell Award. He is a Fellow of the ACM, IEEE, AAAS, IAPR (International Association for Pattern Recognition), and UCGIS (University Consortium for Geographic Information Science). He received best paper awards in the 2007 Computers & Graphics Journal, at the 2008 ACM SIGMOD and SIGSPATIAL ACM GIS Conferences, the 2012 SIGSPATIAL MobiGIS Workshop, and the 2013 SIGSPATIAL GIR Workshop, as well as a best demo award at the 2011 SIGSPATIAL ACM GIS Conference. His paper at the 2009 IEEE International Conference on Data Engineering (ICDE) was selected as one of the best papers for publication in the IEEE Transactions on Knowledge and Data Engineering. He was elected to the ACM Council as the Capitol Region Representative for the term 1989-1991, and is an ACM Distinguished Speaker.
On Mining Big Data and Social Network Analysis
Philip S. Yu
ACM Fellow, IEEE Fellow
University of Illinois at Chicago, USA
ABSTRACT The problem of big data has become increasingly important in recent years. On the one hand, big data is an asset that can potentially offer tremendous value or reward to the data owner. On the other hand, it poses tremendous challenges to distill value out of big data. The very nature of big data poses challenges not only due to its volume and the velocity at which it is generated, but also due to its variety and veracity. The challenge is thus how to integrate information from different sources, with different formats and veracities, together. The heterogeneous information network model provides an effective way to fuse heterogeneous information across different sources. One of the most critical big data applications is mining social networks. As social networks become increasingly popular, not only does the scale of the networks grow rapidly, with Facebook having more than 1 billion active users, but the complexity of the networks also increases over time. In this talk, we will discuss data fusion issues and approaches using social networks as an example.
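For readers unfamiliar with the model, a heterogeneous information network is simply a graph whose nodes and edges carry types, so entities from different sources (users, posts, tags, and so on) can be fused into one structure and queried by type. The toy sketch below is my own illustration with made-up node names, not an implementation from the talk:

```python
# A toy heterogeneous information network (HIN): typed nodes joined
# by edges, supporting type-restricted neighborhood queries.

from collections import defaultdict

class HIN:
    def __init__(self):
        self.node_type = {}            # node -> type ("user", "post", ...)
        self.edges = defaultdict(set)  # node -> set of neighbors

    def add_node(self, node, ntype):
        self.node_type[node] = ntype

    def add_edge(self, u, v):
        # Undirected edge between two (possibly differently typed) nodes.
        self.edges[u].add(v)
        self.edges[v].add(u)

    def neighbors_of_type(self, node, ntype):
        # Fusion-style query: neighbors restricted to one node type.
        return {v for v in self.edges[node] if self.node_type[v] == ntype}

hin = HIN()
for user in ("alice", "bob"):          # hypothetical data from source A
    hin.add_node(user, "user")
hin.add_node("post1", "post")          # hypothetical data from source B
hin.add_node("go", "tag")
hin.add_edge("alice", "post1")
hin.add_edge("bob", "post1")
hin.add_edge("post1", "go")
print(hin.neighbors_of_type("post1", "user"))
```

Typed paths through such a network (e.g. user–post–tag) are what give the model its power for fusing information with different formats across sources.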
BIOSKETCH Dr. Philip S. Yu is a Distinguished Professor and the Wexler Chair in Information Technology in the Department of Computer Science, University of Illinois at Chicago. Before joining UIC, he was at the IBM Watson Research Center, where he built a world-renowned data mining and database department. He is a Fellow of ACM and IEEE. Dr. Yu is the recipient of the IEEE Computer Society’s 2013 Technical Achievement Award for “pioneering and fundamentally innovative contributions to the scalable indexing, querying, searching, mining and anonymization of big data”. With more than 870 publications and 300 patents, cited more than 62,000 times with an H-index of 116, Dr. Yu is a leader in the data mining and data management community.
Dr. Yu is the Editor-in-Chief of ACM Transactions on Knowledge Discovery from Data. He is on the steering committees of the IEEE International Conference on Data Mining and the ACM Conference on Information and Knowledge Management, and was a member of the IEEE Data Engineering steering committee. He was the Editor-in-Chief of IEEE Transactions on Knowledge and Data Engineering (2001-2004). He received the Research Contributions Award from the IEEE International Conference on Data Mining (ICDM) in 2003, the ICDM 2013 10-Year Highest-Impact Paper Award, and the EDBT Test of Time Award (2014). Dr. Yu received his PhD from Stanford University.