A HANDS-ON INTRODUCTION TO DATA SCIENCE
CHIRAG SHAH

A Hands-On Introduction to Data Science

This book introduces the field of data science in a practical and accessible manner, using a hands-on approach that assumes no prior knowledge of the subject. The foundational ideas and techniques of data science are presented independently of technology, allowing students to develop a firm understanding of the subject without a strong technical background, and ensuring that the material remains relevant even after tools and technologies change. Using popular data science tools such as Python and R, the book offers many examples of real-life applications, with practice ranging from small to big data. A suite of online material for both instructors and students provides a strong supplement to the book, including datasets, chapter slides, solutions, sample exams, and curriculum suggestions. This entry-level textbook is ideally suited to readers from a range of disciplines wishing to build a practical, working knowledge of data science.

Chirag Shah is an Associate Professor at the University of Washington in Seattle. Before that, he was a faculty member at Rutgers University, where he also served as the Coordinator of the Data Science concentration for the Master of Information. He has been teaching data science and machine learning courses to undergraduate, master's, and Ph.D. students for more than a decade. His research focuses on issues of search and recommendations using data mining and machine learning. Dr. Shah received his M.S. in Computer Science from the University of Massachusetts Amherst, and his Ph.D. in Information Science from the University of North Carolina at Chapel Hill. He directs the InfoSeeking Lab, supported by awards from the National Science Foundation (NSF), the National Institutes of Health (NIH), and the Institute of Museum and Library Services (IMLS), as well as Amazon, Google, and Yahoo!.
He was a Visiting Research Scientist at Spotify and has served as a consultant to the United Nations Data Analytics on various data science projects. He is currently working at Amazon in Seattle on large-scale e-commerce data and machine learning problems as an Amazon Scholar.

A Hands-On Introduction to Data Science
CHIRAG SHAH
University of Washington

University Printing House, Cambridge CB2 8BS, United Kingdom
One Liberty Plaza, 20th Floor, New York, NY 10006, USA
477 Williamstown Road, Port Melbourne, VIC 3207, Australia
314–321, 3rd Floor, Plot 3, Splendor Forum, Jasola District Centre, New Delhi – 110025, India
79 Anson Road, #06–04/06, Singapore 079906

Cambridge University Press is part of the University of Cambridge. It furthers the University's mission by disseminating knowledge in the pursuit of education, learning, and research at the highest international levels of excellence.

www.cambridge.org
Information on this title: www.cambridge.org/9781108472449
DOI: 10.1017/9781108560412

© Chirag Shah 2020

This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published 2020
Printed in Singapore by Markono Print Media Pte Ltd
A catalogue record for this publication is available from the British Library.
ISBN 978-1-108-47244-9 Hardback
Additional resources for this publication at www.cambridge.org/shah.

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party Internet websites referred to in this publication and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.

To my amazingly smart and sweet daughters – Sophie, Zoe, and Sarah – for adding colors and curiosity back to doing science and living life!
Contents

Preface
About the Author
Acknowledgments

Part I: Conceptual Introductions

1 Introduction
  1.1 What Is Data Science?
  1.2 Where Do We See Data Science?
    1.2.1 Finance
    1.2.2 Public Policy
    1.2.3 Politics
    1.2.4 Healthcare
    1.2.5 Urban Planning
    1.2.6 Education
    1.2.7 Libraries
  1.3 How Does Data Science Relate to Other Fields?
    1.3.1 Data Science and Statistics
    1.3.2 Data Science and Computer Science
    1.3.3 Data Science and Engineering
    1.3.4 Data Science and Business Analytics
    1.3.5 Data Science, Social Science, and Computational Social Science
  1.4 The Relationship between Data Science and Information Science
    1.4.1 Information vs. Data
    1.4.2 Users in Information Science
    1.4.3 Data Science in Information Schools (iSchools)
  1.5 Computational Thinking
  1.6 Skills for Data Science
  1.7 Tools for Data Science
  1.8 Issues of Ethics, Bias, and Privacy in Data Science
  Summary
  Key Terms
  Conceptual Questions
  Hands-On Problems

2 Data
  2.1 Introduction
  2.2 Data Types
    2.2.1 Structured Data
    2.2.2 Unstructured Data
    2.2.3 Challenges with Unstructured Data
  2.3 Data Collections
    2.3.1 Open Data
    2.3.2 Social Media Data
    2.3.3 Multimodal Data
    2.3.4 Data Storage and Presentation
  2.4 Data Pre-processing
    2.4.1 Data Cleaning
    2.4.2 Data Integration
    2.4.3 Data Transformation
    2.4.4 Data Reduction
    2.4.5 Data Discretization
  Summary
  Key Terms
  Conceptual Questions
  Hands-On Problems
  Further Reading and Resources

3 Techniques
  3.1 Introduction
  3.2 Data Analysis and Data Analytics
  3.3 Descriptive Analysis
    3.3.1 Variables
    3.3.2 Frequency Distribution
    3.3.3 Measures of Centrality
    3.3.4 Dispersion of a Distribution
  3.4 Diagnostic Analytics
    3.4.1 Correlations
  3.5 Predictive Analytics
  3.6 Prescriptive Analytics
  3.7 Exploratory Analysis
  3.8 Mechanistic Analysis
    3.8.1 Regression
  Summary
  Key Terms
  Conceptual Questions
  Hands-On Problems
  Further Reading and Resources

Part II: Tools for Data Science

4 UNIX
  4.1 Introduction
  4.2 Getting Access to UNIX
  4.3 Connecting to a UNIX Server
    4.3.1 SSH
    4.3.2 FTP/SCP/SFTP
  4.4 Basic Commands
    4.4.1 File and Directory Manipulation Commands
    4.4.2 Process-Related Commands
    4.4.3 Other Useful Commands
    4.4.4 Shortcuts
  4.5 Editing on UNIX
    4.5.1 The vi Editor
    4.5.2 The Emacs Editor
  4.6 Redirections and Piping
  4.7 Solving Small Problems with UNIX
  Summary
  Key Terms
  Conceptual Questions
  Hands-On Problems
  Further Reading and Resources

5 Python
  5.1 Introduction
  5.2 Getting Access to Python
    5.2.1 Download and Install Python
    5.2.2 Running Python through Console
    5.2.3 Using Python through Integrated Development Environment (IDE)
  5.3 Basic Examples
  5.4 Control Structures
  5.5 Statistics Essentials
    5.5.1 Importing Data
    5.5.2 Plotting the Data
    5.5.3 Correlation
    5.5.4 Linear Regression
    5.5.5 Multiple Linear Regression
  5.6 Introduction to Machine Learning
    5.6.1 What Is Machine Learning?
    5.6.2 Classification (Supervised Learning)
    5.6.3 Clustering (Unsupervised Learning)
    5.6.4 Density Estimation (Unsupervised Learning)
  Summary
  Key Terms
  Conceptual Questions
  Hands-On Problems
  Further Reading and Resources

6 R
  6.1 Introduction
  6.2 Getting Access to R
  6.3 Getting Started with R
    6.3.1 Basics
    6.3.2 Control Structures
    6.3.3 Functions
    6.3.4 Importing Data
  6.4 Graphics and Data Visualization
    6.4.1 Installing ggplot2
    6.4.2 Loading the Data
    6.4.3 Plotting the Data
  6.5 Statistics and Machine Learning
    6.5.1 Basic Statistics
    6.5.2 Regression
    6.5.3 Classification
    6.5.4 Clustering
  Summary
  Key Terms
  Conceptual Questions
  Hands-On Problems
  Further Reading and Resources

7 MySQL
  7.1 Introduction
  7.2 Getting Started with MySQL
    7.2.1 Obtaining MySQL
    7.2.2 Logging in to MySQL
  7.3 Creating and Inserting Records
    7.3.1 Importing Data
    7.3.2 Creating a Table
    7.3.3 Inserting Records
  7.4 Retrieving Records
    7.4.1 Reading Details about Tables
    7.4.2 Retrieving Information from Tables
  7.5 Searching in MySQL
    7.5.1 Searching within Field Values
    7.5.2 Full-Text Searching with Indexing
  7.6 Accessing MySQL with Python
  7.7 Accessing MySQL with R
  7.8 Introduction to Other Popular Databases
    7.8.1 NoSQL
    7.8.2 MongoDB
    7.8.3 Google BigQuery
  Summary
  Key Terms
  Conceptual Questions
  Hands-On Problems
  Further Reading and Resources

Part III: Machine Learning for Data Science

8 Machine Learning Introduction and Regression
  8.1 Introduction
  8.2 What Is Machine Learning?
  8.3 Regression
  8.4 Gradient Descent
  Summary
  Key Terms
  Conceptual Questions
  Hands-On Problems
  Further Reading and Resources

9 Supervised Learning
  9.1 Introduction
  9.2 Logistic Regression
  9.3 Softmax Regression
  9.4 Classification with kNN
  9.5 Decision Tree
    9.5.1 Decision Rule
    9.5.2 Classification Rule
    9.5.3 Association Rule
  9.6 Random Forest
  9.7 Naïve Bayes
  9.8 Support Vector Machine (SVM)
  Summary
  Key Terms
  Conceptual Questions
  Hands-On Problems
  Further Reading and Resources

10 Unsupervised Learning
  10.1 Introduction
  10.2 Agglomerative Clustering
  10.3 Divisive Clustering
  10.4 Expectation Maximization (EM)
  10.5 Introduction to Reinforcement Learning
  Summary
  Key Terms
  Conceptual Questions
  Hands-On Problems
  Further Reading and Resources

Part IV: Applications, Evaluations, and Methods

11 Hands-On with Solving Data Problems
  11.1 Introduction
  11.2 Collecting and Analyzing Twitter Data
  11.3 Collecting and Analyzing YouTube Data
  11.4 Analyzing Yelp Reviews and Ratings
  Summary
  Key Terms
  Conceptual Questions
  Practice Questions

12 Data Collection, Experimentation, and Evaluation
  12.1 Introduction
  12.2 Data Collection Methods
    12.2.1 Surveys
    12.2.2 Survey Question Types
    12.2.3 Survey Audience
    12.2.4 Survey Services
    12.2.5 Analyzing Survey Data
    12.2.6 Pros and Cons of Surveys
    12.2.7 Interviews and Focus Groups
    12.2.8 Why Do an Interview?
    12.2.9 Why Focus Groups?
    12.2.10 Interview or Focus Group Procedure
    12.2.11 Analyzing Interview Data
    12.2.12 Pros and Cons of Interviews and Focus Groups
    12.2.13 Log and Diary Data
    12.2.14 User Studies in Lab and Field
  12.3 Picking Data Collection and Analysis Methods
    12.3.1 Introduction to Quantitative Methods
    12.3.2 Introduction to Qualitative Methods
    12.3.3 Mixed Method Studies
  12.4 Evaluation
    12.4.1 Comparing Models
    12.4.2 Training–Testing and A/B Testing
    12.4.3 Cross-Validation
  Summary
  Key Terms
  Conceptual Questions
  Further Reading and Resources

Appendices
  Appendix A: Useful Formulas from Differential Calculus
    Further Reading and Resources
  Appendix B: Useful Formulas from Probability
    Further Reading and Resources
  Appendix C: Useful Resources
    C.1 Tutorials
    C.2 Tools
  Appendix D: Installing and Configuring Tools
    D.1 Anaconda
    D.2 IPython (Jupyter) Notebook
    D.3 Spyder
    D.4 R
    D.5 RStudio
  Appendix E: Datasets and Data Challenges
    E.1 Kaggle
    E.2 RecSys
    E.3 WSDM
    E.4 KDD Cup
  Appendix F: Using Cloud Services
    F.1 Google Cloud Platform
    F.2 Hadoop
    F.3 Microsoft Azure
    F.4 Amazon Web Services (AWS)
  Appendix G: Data Science Jobs
    G.1 Marketing
    G.2 Corporate Retail and Sales
    G.3 Legal
    G.4 Health and Human Services
  Appendix H: Data Science and Ethics
    H.1 Data Supply Chain
    H.2 Bias and Inclusion
    H.3 Considering Best Practices and Codes of Conduct
  Appendix I: Data Science for Social Good

Index

Preface

Data science is one of the fastest-growing disciplines at the university level. We see more job postings that require training in data science, more academic appointments in the field, and more courses offered, both online and in traditional settings. It could be argued that data science is nothing novel, but just statistics through a different lens.
What matters is that we are living in an era in which the kind of problems that could be solved using data are driving a huge wave of innovations in various industries – from healthcare to education, and from finance to policy-making. More importantly, data and data analysis are playing an increasingly large role in our day-to-day life, including in our democracy. Thus, knowing the basics of data and data analysis has become a fundamental skill that everyone needs, even if they do not want to pursue a degree in computer science, statistics, or data science. Recognizing this, many educational institutions have started developing and offering not just degrees and majors in the field but also minors and certificates in data science that are geared toward students who may not become data scientists but could still benefit from data literacy skills in the same way every student learns basic reading, writing, and comprehension skills. This book is not just for data science majors but also for those who want to develop their data literacy. It is organized in a way that provides a very easy entry for almost anyone to become introduced to data science, but it also has enough fuel to take one from that beginning stage to a place where they feel comfortable obtaining and processing data for deriving important insights. In addition to providing basics of data and data processing, the book teaches standard tools and techniques. It also examines implications of the use of data in areas such as privacy, ethics, and fairness. Finally, as the name suggests, this text is meant to provide a hands-on introduction to these topics. Almost everything presented in the book is accompanied by examples and exercises that one could try – sometimes by hand and other times using the tools taught here. In teaching these topics myself, I have found this to be a very effective method.
The remainder of this preface explains how this book is organized, how it could be used for fulfilling various teaching needs, and what specific requirements a student needs to meet to make the most out of it.

Requirements and Expectations

This book is intended for advanced undergraduates or graduate students in information science, computer science, business, education, psychology, sociology, and related fields who are interested in data science. It is not meant to provide in-depth treatment of any programming language, tool, or platform. Similarly, while the book covers topics such as machine learning and data mining, it is not structured to give detailed theoretical instruction on them; rather, these topics are covered in the context of applying them to solving various data problems with hands-on exercises. The book assumes very little to no prior exposure to programming or technology. It does, however, expect the student to be comfortable with computational thinking (see Chapter 1) and the basics of statistics (covered in Chapter 3). The student should also have general computer literacy, including skills to download, install, and configure software, do file operations, and use online resources. Each chapter lists specific requirements and expectations, many of which can be met by going over some other parts of the book (usually an earlier chapter or an appendix). Almost all the tools and software used in this book are free. There is no requirement of a specific operating system or computer architecture, but it is assumed that the student has a relatively modern computer with reasonable storage, memory, and processing power. In addition, a reliable and preferably high-speed Internet connection is required for several parts of this book.

Structure of the Book

The book is organized in four parts. Part I includes three chapters that serve as the foundations of data science. Chapter 1 introduces the field of data science, along with various applications.
It also points out important differences and similarities with related fields of computer science, statistics, and information science. Chapter 2 describes the nature and structure of data as we encounter it today. It introduces the student to data formats, storage, and retrieval infrastructures. Chapter 3 introduces several important techniques for data science. These techniques stem primarily from statistics and include correlation analysis, regression, and an introduction to data analytics. Part II of this book includes chapters to introduce various tools and platforms such as UNIX (Chapter 4), Python (Chapter 5), R (Chapter 6), and MySQL (Chapter 7). It is important to keep in mind that, since this is not a programming or database book, the objective here is not to go systematically into various parts of these tools. Rather, we focus on learning the basics and the relevant aspects of these tools to be able to solve various data problems. These chapters therefore are organized around addressing various data-driven problems. In the chapters covering Python and R, we also introduce basic machine learning. But machine learning is a crucial topic for data science that cannot be treated just as an afterthought, which is why Part III of this book is devoted to it. Specifically, Chapter 8 provides a more formal introduction to machine learning and includes a few techniques that are basic and broadly applicable at the same time. Chapter 9 describes in some depth supervised learning methods, and Chapter 10 presents unsupervised learning. It should be noted that, since this book is focused on data science and not core computer science or mathematics, we skip much of the underlying math and formal structuring while discussing and applying machine learning techniques. The chapters in Part III, however, do present machine learning methods and techniques using adequate math in order to discuss the theories and intuitions behind them in detail.
Finally, Part IV of this book takes the techniques from Part I, as well as the tools from Parts II and III, and starts applying them to problems of real-life significance. In Chapter 11, we take this opportunity by applying various data science techniques to several real-life problems, including those involving social media, finance, and social good. In addition, Chapter 12 provides further coverage of data collection, experimentation, and evaluation. The book is full of extra material that either adds more value and knowledge to your existing data science theories and practices, or provides broader and deeper treatment of some of the topics. Throughout the book, there are several FYI boxes that provide important and relevant information without interrupting the flow of the text, allowing the student to be aware of various issues such as privacy, ethics, and fairness without being overwhelmed by them. The appendices of this book provide quick reference to various formulations relating to differential calculus and probability, as well as helpful pointers and instructions for installing and configuring various tools used in the book. For those interested in using cloud-based platforms and tools, there is also an appendix that shows how to sign up, configure, and use them. Another appendix provides a listing of various sources for obtaining small to large datasets for more practice, and even for participating in data challenges to win some cool prizes and recognition. There is also an appendix that provides helpful information related to data science jobs in various fields and what skills one should have to target those calls. Finally, a couple of appendices introduce the ideas of data ethics and data science for social good to inspire you to be a responsible and socially aware data citizen. The book also has an online appendix (OA), accessible through the book's website at www.cambridge.org/shah, which is regularly updated to reflect any changes in data and other resources.
The primary purpose of this online appendix is to provide you with the most current and updated datasets, or links to datasets, that you can download and use in the dozens of examples and try-it-yourself exercises in the chapters, as well as the data problems at the end of the chapters. Look for the icon at various places informing you that you need to find the needed resource in the OA. In the description of that exercise, you will see a specific number (e.g., OA 3.2) that tells you exactly where to go in the online appendix.

Using This Book in Teaching

The book is quite deliberately organized around teaching data science to beginner computer science (CS) students or intermediate to advanced non-CS students. The book is modular, making it easier for both students and teachers to cover topics to the desired depth. This makes it quite suitable for the book to be used as a main reference book or textbook for a data science curriculum. The following is a suggested curriculum path in data science using this book. It contains five courses, each lasting a semester or a quarter.

• Introduction to data science: Chapters 1 and 2, with some elements from Part II as needed.
• Data analytics: Chapter 3, with some elements from Part II as needed.
• Problem solving with data or programming for data science: Chapters 4–7.
• Machine learning for data science: Chapters 8–10.
• Research methods for data science: Chapter 12, with appropriate elements from Chapter 3 and Part II.

At the website for this book is a Resources tab with a section labeled "For Instructors." This section contains sample syllabi for various courses that could be taught using this book, PowerPoint slides for each chapter, and other useful resources such as sample mid-term and final exams. These resources make it easier for someone teaching this course for the first time to adapt the text as needed for his or her own data science curriculum.
Each chapter also has several conceptual questions and hands-on problems. The conceptual questions could be used for in-class discussions, homework, or quizzes. For each new technique or problem covered in this book, there are at least two hands-on problems: one could be used in class, and the other could be given as homework or on an exam. Most hands-on exercises in the chapters are also immediately followed by hands-on homework exercises that a student could try for further practice, or that an instructor could assign as homework or as an in-class practice assignment.

Strengths and Unique Features of This Book

Data science has a very visible presence these days, and it is not surprising that there are currently several available books and much material related to the field. A Hands-On Introduction to Data Science is different from the other books in several ways.

• It is targeted to students with very basic experience with technology. Students who fit in that category are majoring in information science, business, psychology, sociology, education, health, cognitive science, and indeed any area in which data can be applied. The study of data science should not be limited to those studying computer science or statistics. This book is intended for those audiences.
• The book starts by introducing the field of data science without any prior expectation of knowledge on the part of the reader. It then introduces the reader to some foundational ideas and techniques that are independent of technology. This does two things: (1) it provides an easier access point for a student without a strong technical background; and (2) it presents material that will continue to be relevant even when tools and technologies change.
• Based on my own teaching and curriculum development experiences, I have found that most data science books on the market fall into two categories: they are either too technical, making them suitable only for a limited audience, or they are structured to be simply informative, making it hard for the reader to actually use and apply data science tools and techniques. A Hands-On Introduction to Data Science is aimed at a nice middle ground: on the one hand, it is not simply describing data science, but also teaching real hands-on tools (Python, R) and techniques (from basic regression to various forms of machine learning); on the other hand, it does not require students to have a strong technical background to be able to learn and practice data science.
• A Hands-On Introduction to Data Science also examines implications of the use of data in areas such as privacy, ethics, and fairness. For instance, it discusses how unbalanced data used without enough care with a machine learning technique could lead to biased (and often unfair) predictions. There is also an introduction to the newly formulated General Data Protection Regulation (GDPR) in Europe.
• The book provides many examples of real-life applications, as well as practice ranging from small to big data. For instance, Chapter 4 has an example of working with housing data where simple UNIX commands can extract valuable insights. In Chapter 5, we see how multiple linear regression can be easily implemented using Python to learn how advertising spending on various media (TV, radio) could influence sales. Chapter 6 includes an example that uses R to analyze data about wines to predict which ones are of high quality. Chapters 8–10 on machine learning have many real-life and general-interest problems from different fields as the reader is introduced to various techniques.
Chapter 11 has hands-on exercises for collecting and analyzing social media data from services such as Twitter and YouTube, as well as working with large datasets (Yelp data with more than a million records). Many of the examples can be worked by hand or with everyday software, without requiring specialized tools. This makes it easier for a student to grasp a concept without having to worry about programming structures, and it allows the book to be used for non-majors as well as in professional certificate courses.
• Each chapter has plenty of in-chapter exercises where I walk the reader through solving a data problem using a new technique, homework exercises for more practice, and more hands-on problems (often using real-life data) at the end of the chapters. There are 37 hands-on solved exercises, 46 hands-on try-it-yourself exercises, and 55 end-of-chapter hands-on problems.
• The book is supplemented by a generous set of material for instructors. These instructor resources include curriculum suggestions (even full-length syllabi for some courses), slides for each chapter, datasets, program scripts, answers and solutions to each exercise, as well as sample mid-term exams and final projects.

About the Author

Dr. Chirag Shah is an Associate Professor at the University of Washington in Seattle. Before that, he was a faculty member at Rutgers University. He is a Senior Member of the Association for Computing Machinery (ACM). He received his Ph.D. in Information Science from the University of North Carolina at Chapel Hill and an M.S. in Computer Science from the University of Massachusetts at Amherst. His research interests include studies of interactive information seeking and retrieval, with applications to personalization and recommendation, as well as applying machine learning and data mining techniques to both big data and tiny data problems. He has published several books and peer-reviewed articles in the areas of information seeking and social media.
He has developed the Coagmento system for collaborative and social searching, IRIS (Information Retrieval and Interaction System) for investigating and implementing interactive IR activities, as well as several systems for collecting and analyzing data from social media channels, including the award-winning ContextMiner, InfoExtractor, TubeKit, and SOCRATES. He directs the InfoSeeking Lab, where he investigates issues related to information seeking, social media, and neural information retrieval. These research projects are supported by grants from the National Science Foundation (NSF), the National Institutes of Health (NIH), the Institute of Museum and Library Services (IMLS), Amazon, Google, and Yahoo!. He also serves as a consultant to the United Nations Data Analytics on various data science projects involving social and political issues, peacekeeping, climate change, and energy. He spent his last sabbatical at Spotify as a Visiting Research Scientist and is currently consulting for Amazon on personalization and recommendation problems as an Amazon Scholar. Dr. Shah has taught extensively to both undergraduate and graduate (master's and Ph.D.) students on topics of data science, machine learning, information retrieval (IR), human–computer interaction (HCI), and quantitative research methods. He has also delivered special courses and tutorials at various international venues, and created massive open online courses (MOOCs) for platforms such as Coursera. He has developed several courses and curricula for data science and advised dozens of undergraduate and graduate students pursuing data science careers. This book is a result of his many years of teaching, advising, researching, and realizing the need for such a resource.

chirags@uw.edu
http://chiragshah.org
@chirag_shah

Acknowledgments

A book like this does not happen without a lot of people's help, and it would be rude of me not to acknowledge at least some of those people here.
As is the case with almost all of my projects, this one would not have been possible without the love and support of my wife Lori. She not only understands late nights and long weekends working on a project like this, but also keeps me grounded on what matters the most in life – my family, my students, and the small difference that I am trying to make in this world through the knowledge and skills I have. My sweet and smart daughters – Sophie, Zoe, and Sarah – have also kept me connected to reality while I worked on this book. They have inspired me to look beyond data and information to appreciate the human values behind them. After all, why bother doing anything in this book if it is not helping human knowledge and advancement in some way? I am constantly amazed by my kids' curiosity and sense of adventure, because those are the qualities one needs in doing any kind of science, and certainly data science. A lot of the analyses and problem solving presented in this book fall under this category, where we are not simply processing some data, but are driven by a sense of curiosity and a quest to derive new knowledge. This book, as I have noted in the Preface, happened organically over many years through developing and teaching various data science classes. And so I need to thank all of those students who sat in my classes or joined online, went through my material, asked questions, provided feedback, and helped me learn more. With every iteration of every class I have taught in data science, things have gotten better. In essence, what you are holding in your hands is the result of the best iteration so far. In addition to hundreds (or thousands, in the case of MOOCs) of students over the years, there are specific students and assistants I need to thank for their direct and substantial contributions to this book.
My InfoSeeking Lab assistants Liz Smith and Catherine McGowan have been tremendously helpful in not only proofreading, but also helping with the literature review and contributing several pieces of writing. Similarly, Dongho Choi and Soumik Mandal, two of my Ph.D. students, have contributed substantially to some of the writing and many of the examples and exercises presented in this book. If it were not for the help and dedication of these four people, this book would have been delayed by at least a year. I am also thankful to my Ph.D. students Souvick Ghosh, who provided some writeup on misinformation, and Ruoyuan Gao, for contributing to the topic of fairness and bias. Finally, I am eternally grateful to the wonderful staff of Cambridge University Press for guiding me through the development of this book from the beginning. I would specifically call out Lauren Cowles, Lisa Pinto, and Stefanie Seaton. They have been an amazing team, helping me in almost every aspect of this book and ensuring that it meets the highest standards of quality and accessibility that one would expect from the Press. Writing a book is often a painful endeavor, but when you have a support team like this, it becomes possible and even a fun project! I am almost certain that I have forgotten many more people to thank here, but they should know that it was a result of my forgetfulness and not ungratefulness.

PART I: CONCEPTUAL INTRODUCTIONS

This part includes three chapters that serve as the foundations of data science. If you have never done anything with data science or statistics, I highly recommend going through this part before proceeding further. If, on the other hand, you have a good background in statistics and a basic knowledge of data storage, formats, and processing, you can easily skim through most of the material here. Chapter 1 introduces the field of data science, along with various applications.
It also points out important differences and similarities with related fields of computer science, statistics, and information science. Chapter 2 describes the nature and structure of data as we encounter it today. It introduces the student to data formats, storage, and retrieval infrastructures. Chapter 3 introduces several important techniques for data science. These techniques stem primarily from statistics and include correlation analysis, regression, and an introduction to data analytics. No matter where you come from, I would still recommend paying attention to some of the sections in Chapter 1 that introduce various basic concepts of data science and how they are related to other disciplines. In my experience, I have also found that various aspects of data pre-processing are often skipped in many data science curricula, but if you want to develop a more comprehensive understanding of data science, I suggest you go through Chapter 2 as well. Finally, even if you have a solid background in statistics, it would not hurt to at least skim through Chapter 3, as it introduces some of the statistical concepts that we will need many times in the rest of the book. 1 Introduction “It is a capital mistake to theorize before one has data. Insensibly, one begins to twist the facts to suit theories, instead of theories to suit facts.” — Sherlock Holmes What do you need? • A general understanding of computer and data systems. • A basic understanding of how smartphones and other day-to-day devices work. What will you learn? • Definitions and notions of data science. • How data science is related to other disciplines. • Computational thinking – a way to solve problems systematically. • What skills data scientists need. 1.1 What Is Data Science? Sherlock Holmes would have loved living in the twenty-first century.
We are drenched in data, and so many of our problems (including a murder mystery) can be solved using large amounts of data existing at personal and societal levels. These days it is fair to assume that most people are familiar with the term “data.” We see it everywhere. And if you have a cellphone, then chances are this is something you have encountered frequently. Assuming you are a “connected” person who has a smartphone, you probably have a data plan from your phone service provider. The most common cellphone plans in the USA include unlimited talk and text, and a limited amount of data – 5 GB, 20 GB, etc. And if you have one of these plans, you know well that you are “using data” through your phone and you get charged per usage of that data. You understand that checking your email and posting a picture on a social media platform consumes data. And if you are a curious (or thrifty) sort, you calculate how much data you consume monthly and pick a plan that fits your needs. You may also have come across terms like “data sharing,” when picking a family plan for your phone(s). But there are other places where you may have encountered the notion of data sharing. For instance, if you have concerns about privacy, you may want to know if your cellphone company “shares” data about you with others (including the government). And finally, you may have heard about “data warehouses,” as if data is being kept in big boxes on tall shelves in middle-of-nowhere locations. In the first case, the individual is consuming data by retrieving email messages and posting pictures. In the second scenario concerning data sharing, “data” refers to information about you. And third, data is used as though it represents a physical object that is being stored somewhere. The nature and the size of “data” in these scenarios vary enormously – from personal to institutional, and from a few kilobytes (kB) to several petabytes (PB).
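Since these units of data will come up throughout the book, here is a quick sketch of the scale involved. It uses the decimal convention, in which each named unit is a factor of 1000 larger than the previous one; the small helper function is purely illustrative:

```python
# Decimal (SI) byte units, each a factor of 1000 apart.
UNITS = ["B", "kB", "MB", "GB", "TB", "PB", "EB", "ZB"]

def to_bytes(value, unit):
    """Convert a quantity expressed in `unit` to raw bytes."""
    return value * 1000 ** UNITS.index(unit)

# A few kilobytes versus several petabytes spans twelve orders of magnitude:
print(to_bytes(1, "PB") // to_bytes(1, "kB"))  # 1000000000000 (a trillion)
```

(Some contexts, such as operating systems reporting memory, use binary units where each step is a factor of 1024; we stick with the simpler decimal convention here.)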
In this book, we will consider these and more scenarios and learn about defining, storing, cleaning, retrieving, and analyzing data – all for the purpose of deriving meaningful insights toward making decisions and solving problems. And we will use systematic, verifiable, and repeatable processes; or in other words, we will apply scientific approaches and techniques. Finally, we will do almost all of these processes with a hands-on approach. That means we will look at data and situations that generate or use data, and we will manipulate data using tools and techniques. But before we begin, let us look at how others describe data science. FYI: Datum, Data, and Science Webster’s dictionary (https://www.merriam-webster.com/dictionary/datum) defines data (the plural form of datum) as “something given or admitted especially as a basis for reasoning or inference.” For the purpose of this book, as is common these days, we will use data for both plural and singular forms. For example, imagine a table containing birthdays of everyone in your class or office. We can consider this whole table (a collection of birthdays) as data. Each birthday is a single point of data, which could be called datum, but we will call that data too. There is also often a debate about what is the difference between data and information. In fact, it is common to use one to define the other (e.g., “data is a piece of information”). We will revisit this later in this chapter when we compare data science and information science. Since we are talking about sciences, it is also important to clarify here what exactly is science.
According to the Oxford dictionary (https://en.oxforddictionaries.com/definition/science), science is “systematic study of the structure and behaviour of the physical and natural world through observation and experiment.” When we talk about science, we are interested in using a systematic approach that can allow us to study a phenomenon, often giving us the ability to explain and derive meaningful insights. Frank Lo, the Director of Data Science at Wayfair, says this on datajobs.com: “Data science is a multidisciplinary blend of data inference, algorithm development, and technology in order to solve analytically complex problems.”1 He goes on to elaborate that data science, at its core, involves uncovering insights from mining data. This happens through exploration of the data using various tools and techniques, testing hypotheses, and creating conclusions with data and analyses as evidence. In one famous article, Davenport and Patil2 called data science “the sexiest job of the twenty-first century.” Listing data-driven companies such as (in alphabetical order) Amazon, eBay, Google, LinkedIn, Microsoft, Twitter, and Walmart, the authors see a data scientist as a hybrid of data hacker, analyst, communicator, and trusted adviser; a Sherlock Holmes for the twenty-first century. As data scientists face technical limitations and make discoveries to address these problems, they communicate what they have learned and suggest implications for new business directions. They also need to be creative in visually displaying information, and clearly and compellingly showing the patterns they find. One of the data scientist’s most important roles in the field is to advise executives and managers on the implications of the data for their products, services, processes, and decisions.
In this book, we will consider data science as a field of study and practice that involves the collection, storage, and processing of data in order to derive important insights into a problem or a phenomenon. Such data may be generated by humans (surveys, logs, etc.) or machines (weather data, road vision, etc.), and could be in different formats (text, audio, video, augmented or virtual reality, etc.). We will also treat data science as an independent field by itself rather than a subset of another domain, such as statistics or computer science. This will become clearer as we look at how data science relates to and differs from various fields and disciplines later in this chapter. Why is data science so important now? Dr. Tara Sinclair, the chief economist at indeed.com since 2013, said, “the number of job postings for ‘data scientist’ grew 57%” year-over-year in the first quarter of 2015.3 Why have both industry and academia recently increased their demand for data science and data scientists? What changed within the past several years? The answer is not surprising: we have a lot of data, we continue to generate a staggering amount of data at an unprecedented and ever-increasing speed, analyzing data wisely necessitates the involvement of competent and well-trained practitioners, and analyzing such data can provide actionable insights. The “3V model” attempts to lay this out in a simple (and catchy) way. These are the three Vs: 1. Velocity: The speed at which data is accumulated. 2. Volume: The size and scope of the data. 3. Variety: The massive array of data and types (structured and unstructured). Each of these three Vs regarding data has dramatically increased in recent years. Specifically, the increasing volume of heterogeneous and unstructured (text, images, and video) data, as well as the possibilities emerging from their analysis, renders data science ever more essential.
Figure 1.1 shows the expected volume of data to reach 40 zettabytes (ZB) by the end of 2020, a 50-fold increase over what was available at the beginning of 2010.4 How much is that really? If your computer has a 1 terabyte (TB) hard drive (roughly 1000 GB), 40 ZB is 40 billion times that. To provide a different perspective, the world population is projected to be close to 8 billion by the end of 2020, which means, if we think about data per person, each individual in the world (even the newborns) will have 5 TB of data. 1.2 Where Do We See Data Science? The question should be: Where do we not see data science these days? The great thing about data science is that it is not limited to one facet of society, one domain, or one department of a university; it is virtually everywhere. Let us look at a few examples. 1.2.1 Finance There has been an explosion in the velocity, variety, and volume (that is, the 3Vs) of financial data, just as there has been an exponential growth of data in almost all fields, as we saw in the previous section. Social media activity, mobile interactions, server logs, real-time market feeds, customer service records, transaction details, and information from existing databases combine to create a rich and complex conglomeration of information that experts (*cough, cough*, data scientists!) must tackle. What do financial data scientists do? Through capturing and analyzing new sources of data, building predictive models and running real-time simulations of market events, they help the finance industry obtain the information necessary to make accurate predictions. Data scientists in the financial sector may also partake in fraud detection and risk reduction. Essentially, banks and other loan sanctioning institutions collect a lot of data about the borrower in the initial “paperwork” process.
Data science practices can minimize the chance of loan defaults via information such as customer profiling, past expenditures, and other essential variables that can be used to analyze the probabilities of risk and default. Data science initiatives even help bankers analyze a customer’s purchasing power to more effectively try to sell additional banking products.6 Still not convinced about the importance of data science in finance? Look no further than your credit history, one of the most popular types of risk management services used by banks and other financial institutions to identify the creditworthiness of potential customers. Companies use machine learning algorithms in analyzing past spending behavior and patterns to decide the creditworthiness of customers. [Figure 1.1: Increase of data volume (in ZB) per year over the last 15 years. Source: IDC’s Digital Universe Study, December 2012.5] The credit score, along with other factors, including length of credit history and customer’s age, is in turn used to predict the approximate lending amount that can be safely forwarded to the customer when applying for a new credit card or bank loan. Let us look at a more definitive example. Lending Club is one of the world’s largest online marketplaces that connects borrowers with investors. An inevitable outcome of lending that every lender would like to avoid is default by borrowers. A potential solution to this problem is to build a predictive model from the previous loan dataset that can be used to identify the applicants who are relatively risky for a loan. Lending Club hosts its loan dataset in its data repository (https://www.lendingclub.com/info/download-data.action), and it can be obtained from other popular third-party data repositories7 as well.
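To make this concrete, here is a deliberately simplified sketch of what such a default-prediction model computes. The feature names and weights below are invented for illustration only; a real model would learn its parameters from historical loan data such as Lending Club’s:

```python
import math

def default_probability(income, debt, late_payments):
    """Toy logistic model: estimated probability that a borrower defaults.
    The weights are made up for illustration, not learned from real data."""
    debt_to_income = debt / income
    score = -2.0 + 3.0 * debt_to_income + 0.8 * late_payments
    return 1 / (1 + math.exp(-score))  # logistic (sigmoid) function

# A borrower with little debt and a clean history scores as low risk,
# while heavy debt plus several late payments scores as high risk.
low_risk = default_probability(income=60000, debt=5000, late_payments=0)
high_risk = default_probability(income=60000, debt=45000, late_payments=4)
print(round(low_risk, 2), round(high_risk, 2))
```

In practice, weights like these would be estimated (for example, by fitting a logistic regression on thousands of past loans), and the predicted probability would feed into the lending decision.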
There are various algorithms and approaches that can be applied to create such predictive models. A simple approach to creating such a predictive model from the Lending Club loan dataset is demonstrated at KDnuggets8 if you are interested in learning more. 1.2.2 Public Policy Simply put, public policy is the application of policies, regulations, and laws to the problems of society through the actions of government and agencies for the good of a citizenry. Many branches of social sciences (economics, political science, sociology, etc.) are foundational to the creation of public policy. Data science helps governments and agencies gain insights into citizen behaviors that affect the quality of public life, including traffic, public transportation, social welfare, community wellbeing, etc. This information, or data, can be used to develop plans that address the betterment of these areas. It has become easier than ever to obtain useful data about policies and regulations to analyze and create insights. The following open data repositories are examples: (1) US government (https://www.data.gov/) (2) City of Chicago (https://data.cityofchicago.org/) (3) New York City (https://nycopendata.socrata.com/) As of this writing, the data.gov site had more than 200,000 data repositories on diverse topics that anyone can browse, from agriculture to local government, to science and research. The City of Chicago portal offers a data catalog with equally diverse topics, organized in 16 categories, including administration and finance, historic preservation, and sanitation. NYC OpenData encompasses datasets organized into 10 categories. Clicking on the category City Government, for instance, brings up 495 individual results. NYC OpenData also organizes its data by city agency, of which 94 are listed, from the Administration for Children’s Services to the Teachers Retirement System. The data is available to all interested parties.
A good example of using data to analyze and improve public policy decisions is the Data Science for Social Good project, in which various institutions, including Nova SBE, the Municipality of Cascais, and the University of Chicago, participate in a three-month program that brings together 25 data analytics experts from several countries to work on open public policy datasets, looking for clues to solve relevant problems with an impact on society, such as: how an NGO can use data to estimate the size of a temporary refugee camp in a war zone to organize the provision of help, or how to successfully develop and maintain systems that use data to produce social good and inform public policy. The project usually organizes new events in June of every year.9 1.2.3 Politics Politics is a broad term for the process of electing officials who exercise the policies that govern a state. It includes the process of getting policies enacted and the action of the officials wielding the power to do so. Much of the financial support of government is derived from taxes. Recently, the real-time application of data science to politics has skyrocketed. For instance, data scientists analyzed former US President Obama’s 2008 presidential campaign success with Internet-based campaign efforts.10 In this New York Times article, the writer quotes Arianna Huffington, editor of The Huffington Post, as saying that, without the Internet, Obama would not have been president. Data scientists have been quite successful in constructing the most accurate voter targeting models and increasing voter participation.11 In 2016, the campaign to elect Donald Trump was a brilliant example of the use of data science in social media to tailor individual messages to individual people.
As Twitter has emerged as a major digital PR tool for politics over the last decade, studies12 analyzing the content of tweets from both candidates’ (Trump and Hillary Clinton) Twitter handles, as well as the content of their websites, found significant differences in the emphasis on traits and issues, the main content of tweets, the main sources of retweets, multimedia use, and the level of civility. While Clinton emphasized her masculine traits and feminine issues in her election campaign more than her feminine traits and masculine issues, Trump focused more on masculine issues, paying no particular attention to his traits. Additionally, Trump used user-generated content as sources of his tweets significantly more often than Clinton. Three-quarters of Clinton’s tweets were original content, compared to half of Trump’s tweets; the rest of Trump’s tweets were retweets of and replies to citizens. Extracting such characteristics from data and connecting them to various outcomes (e.g., public engagement) falls squarely under data science. In fact, later in this book we will have hands-on exercises for collecting and analyzing data from Twitter, including extracting sentiments expressed in those tweets. Of course, we have also seen the dark side of this with the infamous Cambridge Analytica data scandal that surfaced in March 2018.13 This data analytics firm obtained data on approximately 87 million Facebook users from an academic researcher in order to target political ads during the 2016 US presidential campaign. While this case brought to public attention the issue of privacy in data, it was hardly the first one. Over the years, we have witnessed many incidents of advertisers, spammers, and cybercriminals using data, obtained legally or illegally, for pushing an agenda or rhetoric. We will have more discussion about this later when we talk about ethics, bias, and privacy issues.
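As a small taste of the kind of tweet analysis described above (the full hands-on Twitter exercises come later in the book), here is a deliberately minimal lexicon-based sentiment scorer. The word lists are made up for illustration; real sentiment tools rely on much richer lexicons and statistical models:

```python
# Tiny illustrative sentiment lexicons -- not real, production lexicons.
POSITIVE = {"great", "win", "strong", "good", "amazing"}
NEGATIVE = {"bad", "weak", "sad", "failing", "crooked"}

def sentiment(tweet):
    """Score a tweet as (# positive words - # negative words)."""
    words = [w.strip(".,!?") for w in tweet.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment("Great rally, amazing crowd!"))  # 2 (positive)
print(sentiment("Sad! Failing policies."))       # -2 (negative)
```

Even a crude scorer like this, applied to thousands of tweets, begins to reveal the kinds of differences in tone and emphasis that the studies above quantified far more rigorously.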
1.2.4 Healthcare Healthcare is another area in which data scientists keep changing their research approach and practices.14 Though the medical industry has always stored data (e.g., clinical studies, insurance information, hospital records), the healthcare industry is now awash in an unprecedented amount of information. This includes biological data such as gene expression, next-generation DNA sequence data, proteomics (study of proteins), and metabolomics (chemical “fingerprints” of cellular processes). While diagnostics and disease prevention studies may seem limited, we may see data from or about a much larger population with respect to clinical data and health outcomes data contained in ever more prevalent electronic health records (EHRs), as well as in longitudinal drug and medical claims. With the tools and techniques available today, data scientists can work on massive datasets effectively, combining data from clinical trials with direct observations by practicing physicians. The combination of raw data with necessary resources opens the door for healthcare professionals to better focus on important, patient-centered medical quandaries, such as what treatments work and for whom. The role of data science in healthcare does not stop with big health service providers; it has also revolutionized personal health management in the last decade. Personal wearable health trackers, such as Fitbit, are prime examples of the application of data science in the personal health space. Due to advances in miniaturizing technology, we can now collect most of the data generated by a human body through such trackers, including information about heart rate, blood glucose, sleep patterns, stress levels, and even brain activity. Equipped with a wealth of health data, doctors and scientists are pushing the boundaries in health monitoring.
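A toy illustration of the kind of analysis such tracker data enables is flagging heart-rate readings that deviate sharply from the wearer’s own baseline. Both the readings and the threshold below are synthetic, chosen only to demonstrate the idea:

```python
def flag_anomalies(readings, threshold=1.3):
    """Return the readings exceeding `threshold` times the mean of all readings."""
    baseline = sum(readings) / len(readings)
    return [r for r in readings if r > threshold * baseline]

# Synthetic resting heart-rate samples (beats per minute) with one spike.
resting_hr = [62, 64, 61, 63, 65, 62, 110, 63]
print(flag_anomalies(resting_hr))  # [110]
```

Real systems build personalized baselines over time and use far more robust statistics, but the pipeline – collect readings, establish a baseline, flag deviations – is the same in spirit.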
Since the rise of personal wearable devices, there has been an incredible amount of research that leverages such devices to study the personal health management space. Health trackers and other wearable devices provide the opportunity for investigators to track adherence to physical activity goals with reasonable accuracy across weeks or even months, which was almost impossible when relying on a handful of self-reports or a small number of accelerometry wear periods. A good example of such a study is the use of wearable sensors to measure adherence to a physical activity intervention among overweight or obese, postmenopausal women,15 which was conducted over a period of 16 weeks. The study found that, using activity-measuring trackers such as those by Fitbit, high levels of self-monitoring were sustained over a long period. Often, even being aware of one’s level of physical activities could be instrumental in supporting or sustaining good behaviors. Apple has partnered with Stanford Medicine16 to collect and analyze data from Apple Watch to identify irregular heart rhythms, including those from potentially serious heart conditions such as atrial fibrillation, which is a leading cause of stroke. Many insurance companies have started providing free or discounted Apple Watch devices to their clients, or have reward programs for those who use such devices in their daily life.17 The data collected through such devices are helping clients, patients, and healthcare providers to better monitor, diagnose, and treat health conditions in ways not possible before. 1.2.5 Urban Planning Many scientists and engineers have come to believe that the field of urban planning is ripe for a significant – and possibly disruptive – change in approach as a result of the new methods of data science.
This belief is based on the number of new initiatives in “informatics” – the acquisition, integration, and analysis of data to understand and improve urban systems and quality of life. The Urban Center for Computation and Data (UrbanCCD), at the University of Chicago, traffics in such initiatives. The research center is using advanced computational methods to understand the rapid growth of cities. The center brings together scholars and scientists from the University of Chicago and Argonne National Laboratory18 with architects, city planners, and many others. The UrbanCCD’s director, Charlie Catlett, stresses that global cities are growing quickly enough to outpace traditional tools and methods of urban design and operation. “The consequences,” he writes on the center’s website,19 “are seen in inefficient transportation networks belching greenhouse gases and unplanned city-scale slums with crippling poverty and health challenges. There is an urgent need to apply advanced computational methods and resources to both explore and anticipate the impact of urban expansion and find effective policies and interventions.” On a smaller scale, chicagoshovels.org provides a “Plow Tracker” so residents can track the city’s 300 snow plows in real time. The site uses online tools to help organize a “Snow Corps” – essentially neighbors helping neighbors, like seniors or the disabled – to shovel sidewalks and walkways. The platform’s app lets travelers know when the next bus is arriving. Considering Chicago’s frigid winters, this can be an important service. Similarly, Boston’s Office of New Urban Mechanics created a SnowCOP app to help city managers respond to requests for help during snowstorms. The Office has more than 20 apps designed to improve public services, such as apps that mine data from residents’ mobile phones to address infrastructure projects. But it is not just large cities.
Jackson, Michigan, with a population of about 32,000, tracks water usage to identify potentially abandoned homes. The list of uses and potential uses is extensive. 1.2.6 Education According to Joel Klein, former Chancellor of New York Public Schools, “when it comes to the intersection of education and technology, simply putting a computer in front of a student, or a child, doesn’t make their lives any easier, or education any better.”20 Technology will definitely have a large part to play in the future of education, but how exactly that happens is still an open question. There is a growing realization among educators and technology evangelists that we are heading toward more data-driven and personalized use of technology in education. And some of that is already happening. The Brookings Institution’s Darrell M. West opened his 2012 report on big data and education by comparing present and future “learning environments.” According to West, today’s students improve their reading skills by reading short stories, taking a test every other week, and receiving graded papers from teachers. But in the future, West postulates that students will learn to read through “a computerized software program,” the computer constantly measuring and collecting data, linking to websites providing further assistance, and giving the student instant feedback. “At the end of the session,” West says, “his teacher will receive an automated readout on [students in the class] summarizing their reading time, vocabulary knowledge, reading comprehension, and use of supplemental electronic resources.”21 So, in essence, teachers of the future will be data scientists! Big data may be able to provide much-needed resources to various educational structures. Data collection and analysis have the potential to improve the overall state of education.
West says, “So-called ‘big data’ make it possible to mine learning information for insights regarding student performance and learning approaches. Rather than rely on periodic test performance, instructors can analyze what students know and what techniques are most effective for each pupil. By focusing on data analytics, teachers can study learning in far more nuanced ways. Online tools enable evaluation of a much wider range of student actions, such as how long they devote to readings, where they get electronic resources, and how quickly they master key concepts.” 1.2.7 Libraries Data science is also frequently applied to libraries. Jeffrey M. Stanton has discussed the overlap between the task of a data science professional and that of a librarian. In his article, he concludes, “In the near future, the ability to fulfill the roles of citizenship will require finding, joining, examining, analyzing, and understanding diverse sources of data […] Who but a librarian will stand ready to give the assistance needed, to make the resources accessible, and to provide a venue for knowledge creation when the community advocate arrives seeking answers?”22 Mark Bieraugel echoes this view in his article on the website of the Association of College and Research Libraries.23 Here, Bieraugel advocates for librarians to create taxonomies, design metadata schemes, and systematize retrieval methods to make big datasets more useful. Even though the role of data science in future libraries as suggested here seems too rosy to be true, in reality it is nearer than you think. Imagine that Alice, a scientist conducting research on diabetes, asks Mark, a research librarian, to help her understand the research gap in previous literature.
Armed with digital technologies, Mark can automate literature reviews for any discipline by reducing ideas and results from thousands of articles into a cohesive bulleted list and then apply data science algorithms, such as network analysis, to visualize trends in emerging lines of research on similar topics. This will make Alice’s job far easier than if she had to painstakingly read all the articles. 1.3 How Does Data Science Relate to Other Fields? While data science has emerged as a field in its own right, as we saw before, it is often considered a subdiscipline of a field such as statistics. One could certainly study data science as a part of one of the existing, well-established fields. But, given the nature of data-driven problems and the momentum with which data science has been able to tackle them, a separate slot is warranted for data science – one that is different from those well-established fields, and yet connected to them. Let us look at how data science is similar to and different from other fields. 1.3.1 Data Science and Statistics Priceonomics (a San Francisco-based company that claims to “turn data into stories”) notes that, not long ago, the term “data science” meant nothing to most people, not even to those who actually worked with data.24 A common response to the term was: “Isn’t that just statistics?” Nate Silver does not seem to think data science differs from statistics. The well-known number cruncher behind the media site FiveThirtyEight – and the guy who famously and correctly predicted the electoral outcome of 49 of 50 states in the 2008 US Presidential election, and a perfect 50 for 50 in 2012 – is more than a bit skeptical of the term. However, his 2016 election prediction model was a dud.
The model predicted Democratic nominee Hillary Clinton’s chance of winning the presidency at 71.4%, over Republican nominee Donald Trump’s 28.6%.25 The only silver lining in his 2016 prediction was that it gave Trump a higher chance of winning the electoral college than almost anyone else.26 “I think data scientist is a sexed up term for a statistician,” Silver told an audience of statisticians in 2013 at the Joint Statistical Meeting.27 The difference between these two closely related fields lies in the invention and advancement of modern computers. Statistics was primarily developed to help people deal with pre-computer “data problems,” such as testing the impact of fertilizer in agriculture, or figuring out the accuracy of an estimate from a small sample. Data science emphasizes the data problems of the twenty-first century, such as accessing information from large databases, writing computer code to manipulate data, and visualizing data. Andrew Gelman, a statistician at Columbia University, writes that it is “fair to consider statistics … as a subset of data science” and probably the “least important” aspect.28 He suggests that the administrative aspects of dealing with data, such as harvesting, processing, storing, and cleaning, are more central to data science than is hard-core statistics. So, how does the knowledge of these fields blend together? Statistician and data visualizer Nathan Yau of Flowing Data suggests that data scientists should have at least three basic skills:29 1. A strong knowledge of basic statistics (see Chapter 3) and machine learning (see Chapters 8–10) – or at least enough to avoid mistaking correlation for causation or extrapolating too much from a small sample size. 2. The computer science skills to take an unruly dataset and use a programming language (like R or Python, see Chapters 5 and 6) to make it easy to analyze. 3.
The ability to visualize and express their data and analysis in a way that is meaningful to somebody less conversant in data (see Chapters 2 and 11). As you can see, this book that you are holding has you covered for most, if not all, of these basic skills (and then some) for data science. 1.3.2 Data Science and Computer Science Perhaps this seems like an obvious application of data science, but computer science involves a number of current and burgeoning initiatives that involve data scientists. Computer scientists have developed numerous techniques and methods, such as (1) database (DB) systems that can handle the increasing volume of data in both structured and unstructured formats, expediting data analysis; (2) visualization techniques that help people make sense of data; and (3) algorithms that make it possible to process complex and heterogeneous data in less time. In truth, data science and computer science overlap and are mutually supportive. Some of the algorithms and techniques developed in the computer science field – such as machine learning algorithms, pattern recognition algorithms, and data visualization techniques – have contributed to the data science discipline. Machine learning is certainly a very crucial part of data science today, and it is hard to do meaningful data science in most domains without at least basic knowledge of machine learning. Fortunately for us, the third part of this book is dedicated to machine learning. While we will not go into as much theoretical depth as a computer scientist would, we are going to see many of the popular and useful machine learning algorithms and techniques applied to various data science problems. 1.3.3 Data Science and Engineering Broadly speaking, engineering in various fields (chemical, civil, computer, mechanical, etc.) has created demand for data scientists and data science methods. Engineers constantly need data to solve problems.
Data scientists have been called upon to develop methods and techniques to meet these needs. Likewise, engineers have assisted data scientists. Data science has benefitted from new software and hardware developed via engineering, such as the CPU (central processing unit) and GPU (graphics processing unit) that substantially reduce computing time. Take the example of jobs in civil engineering. The trend in the construction industry has changed drastically due to the use of technology in the last few decades. Now it is possible to use “smart” building techniques that are rooted in collecting and analyzing large amounts of heterogeneous data. Thanks to predictive algorithms, it has become possible to estimate the unit price that contractors are likely to bid for a specific item, like a guardrail, given the contractor’s location, time of year, total contract value, relevant cost indices, etc. In addition, “smart” building techniques have been introduced through the use of various technologies. From 3D printing of models that can help predict the weak spots in construction, to the use of drones in monitoring the building site during the actual construction phase, all these technologies generate volumes of data that need to be analyzed to engineer the construction design and activity. Thus, through the increasing use of technology in engineering design and applications, it is inevitable that the role of data science will expand in the future.

1.3.4 Data Science and Business Analytics

In general, we can say that the main goal of “doing business” is turning a profit – even with limited resources – through efficient and sustainable manufacturing methods, effective service models, etc. This demands decision-making based on objective evaluation, for which data analysis is essential.
Whether it concerns companies or customers, data related to business is increasingly cheap (easy to obtain, store, and process) and ubiquitous. In addition to the traditional types of data, which are now being digitized through automated procedures, new types of data from mobile devices, wearable sensors, and embedded systems are providing businesses with rich information. New technologies have emerged that seek to help us organize and understand this increasing volume of data. These technologies are employed in business analytics. Business analytics (BA) refers to the skills, technologies, and practices for continuous iterative exploration and investigation of past and current business performance to gain insight and be strategic. BA focuses on developing new perspectives and making sense of performance based on data and statistics. And that is where data science comes in. To fulfill the requirements of BA, data scientists are needed for statistical analysis, including explanatory and predictive modeling and fact-based management, to help drive successful decision-making. There are four types of analytics, each of which holds opportunities for data scientists in business analytics:30

1. Decision analytics: supports decision-making with visual analytics that reflect reasoning.
2. Descriptive analytics: provides insight from historical data with reporting, scorecards, clustering, etc.
3. Predictive analytics: employs predictive modeling using statistical and machine learning techniques.
4. Prescriptive analytics: recommends decisions using optimization, simulation, etc.

We will revisit these in Chapter 3.

1.3.5 Data Science, Social Science, and Computational Social Science

It may sound strange that social science, which began almost four centuries ago and was primarily concerned with society and the relationships among individuals, has anything to do with data science.
Enter the twenty-first century, and not only is data science helping social science, but it is also shaping it, even creating a new branch called computational social science. Since its inception, social science has spread into many branches, including but not limited to anthropology, archaeology, economics, linguistics, political science, psychology, public health, and sociology. Each of these branches has established its own standards, procedures, and modes of collecting data over the years. But connecting theories or results from one discipline to another has become increasingly difficult. This is where computational social science has revolutionized social science research in the last few decades. With the help of data science, computational social science has connected results from multiple disciplines to explore the key urgent question: how will the information revolution in this digital age transform society? Since its inception, computational social science has made tremendous strides in generating arrays of interdisciplinary projects, often in partnership with computer scientists, statisticians, mathematicians, and lately with data scientists. Some of these projects include leveraging tools and algorithms of prediction and machine learning to assist in tackling stubborn policy problems. Others entail applying recent advances in image, text, and speech recognition to classic issues in social science. These projects often demand methodological breakthroughs, scaling proven methods to new levels, as well as designing new metrics and interfaces to make research findings intelligible to scholars, administrators, and policy-makers who may lack computational skill but have domain expertise. After reading the above paragraph, if you think computational social science has only borrowed from data science but has nothing to return, you would be wrong.
Computational social science raises inevitable questions about the politics and ethics often embedded in data science research, particularly when it is based on sociopolitical problems with real-life applications that have far-reaching consequences. Government policies, people’s mandates in elections, and hiring strategies in the private sector are prime examples of such applications.

1.4 The Relationship between Data Science and Information Science

While this book is broad enough to be useful for anyone interested in data science, some aspects are targeted at people interested in or working in information-intensive domains. These include many contemporary jobs that are known as “knowledge work,” such as those in healthcare, pharmaceuticals, finance, policy-making, education, and intelligence. The field of information science, which often stems from computing, computational science, informatics, information technology, or library science, often represents and serves such application areas. The core idea here is to cover people studying, accessing, using, and producing information in various contexts. Let us think about how data science and information science are related. Data is everywhere. Yes, this is the third time I am stating this in this chapter, but this point is that important. Humans and machines are constantly creating new data. Just as natural science focuses on understanding the characteristics and laws that govern natural phenomena, data scientists are interested in investigating the characteristics of data – looking for patterns that reveal how people and society can benefit from data. That perspective often misses the processes and people behind the data, as most researchers and professionals see data from the system side and subsequently focus on quantifying phenomena; they lack an understanding of the users’ perspective.
Information scientists, who look at data in the context in which it is generated and used, can play an important role in bridging the gap between quantitative analysis and an examination of data that tells a story.

1.4.1 Information vs. Data

In an FYI box earlier, we alluded to some connections and differences between data and information. Depending on who you consult, you will get different answers – from seeming differences to a blurred-out line between data and information. To make matters worse, people often use one to mean the other. A traditional view used to be that data is something raw, meaningless, an object that, when analyzed or converted to a useful form, becomes information. Information is also defined as “data that are endowed with meaning and purpose.”31 For example, the number “480,000” is a data point. But when we add an explanation that it represents the number of deaths per year in the USA from cigarette smoking,32 it becomes information. But in many real-world scenarios, the distinction between a meaningful and a meaningless data point is not clear enough for us to differentiate data and information. And therefore, for the purpose of this book, we will not worry about drawing such a line. At the same time, since we are introducing various concepts in this chapter, it is useful for us to at least consider how they are defined in various conceptual frameworks. Let us take one such example. The Data, Information, Knowledge, and Wisdom (DIKW) model differentiates the meaning of each concept and suggests a hierarchical system among them.33 Although various authors and scholars offer several interpretations of this model, the model defines data as (1) fact, (2) signal, and (3) symbol.
Here, information is differentiated from data in that it is “useful.” Unlike conceptions of data in other disciplines, information science demands and presumes a thorough understanding of information, considering different contexts and circumstances related to the data that is created, generated, and shared, mostly by human beings.

1.4.2 Users in Information Science

Studies in information science have focused on the human side of data and information, in addition to the system perspective. While the system perspective typically supports users’ ability to observe, analyze, and interpret the data, the human perspective allows them to turn the data into useful information for their purposes. Different users may not agree on a piece of information’s relevancy depending on various factors that affect judgment, such as “usefulness.”34 Usefulness is a criterion that determines how useful the interaction between the user and the information object (data) is in accomplishing the task or goal of the user. For example, a general user who wants to figure out if drinking coffee is injurious to health may find information in the search engine result pages (SERP) useful, whereas a dietitian who needs to decide whether to recommend that a patient consume coffee may find the same result in the SERP worthless. Therefore, operationalization of the criterion of usefulness will be specific to the user’s task. Scholars in information science tend to combine the user side and the system side to understand how and why data is generated and the information it conveys, given a context. This is often then connected to studying people’s behaviors. For instance, information scientists may collect log data of one’s browser activities to understand one’s search behaviors (the search terms they use, the results they click, the amount of time they spend on various sites, etc.). This could allow them to create better methods for personalization and recommendation.
1.4.3 Data Science in Information Schools (iSchools)

There are several advantages to studying data science in information schools, or iSchools. Data science provides students a more nuanced understanding of individual, community, and society-wide phenomena. Students may, for instance, apply data collected from a particular community to enhance that locale’s wellbeing through policy change and/or urban planning. Essentially, an iSchool curriculum helps students acquire diverse perspectives on data and information. This becomes an advantage as students transition into full-fledged data scientists with a grasp on the big (data) picture. In addition to all the required data science skills and knowledge (including understanding computer science, statistics, machine learning, etc.), the focus on the human factor gives students distinct opportunities. An iSchool curriculum also provides a depth of contextual understanding of information. Studying data science in an iSchool offers unique chances to understand data in contexts including communications, information studies, library science, and media research. The difference between studying data science in an iSchool, as opposed to within a computer science or statistics program, is that the former tends to focus on analyzing data and extracting insightful information grounded in context. This is why the study of “where information comes from” is equally as important as “what it represents” and “how it can be turned into a valuable resource in the creation of business and information technology strategies.” For instance, in the case of analyzing electronic health records, researchers at iSchools are additionally interested in investigating how the corresponding patients perceive and seek health-related information and support from both professionals and peers.
In short, if you are interested in combining the technical with the practical, as well as the human, you would be right at home in an iSchool’s data science department.

1.5 Computational Thinking

Many skills are considered “basic” for everyone. These include reading, writing, and thinking. It does not matter what gender, profession, or discipline one belongs to; one should have all these abilities. In today’s world, computational thinking is becoming an essential skill, not reserved for computer scientists only. What is computational thinking? Typically, it means thinking like a computer scientist. But that is not very helpful, even to computer scientists! According to Jeannette Wing,35 “Computational thinking is using abstraction and decomposition when attacking a large complex task or designing a large complex system” (p. 33). It is an iterative process based on the following three stages:

1. Problem formulation (abstraction)
2. Solution expression (automation)
3. Solution execution and evaluation (analyses)

The three stages and the relationship between them are schematically illustrated in Figure 1.2.

Hands-On Example 1.1: Computational Thinking

Let us consider an example. We are given the following numbers and are tasked with finding the largest of them: 7, 24, 62, 11, 4, 39, 42, 5, 97, 54. Perhaps you can do it just by looking at them. But let us try doing it “systematically.” Rather than looking at all the numbers at the same time, let us look at two at a time. The first two numbers are 7 and 24. Pick the larger of them, which is 24. Now we take that and look at the next number. It is 62. Is it larger than 24? Yes, which means, as of now, 62 is our largest number. The next number is 11. Is it larger than the largest number we know so far, that is, 62? No. So we move on. If you continue this process until you have seen all the remaining numbers, you will end up with 97 as the largest.
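The comparison process just described translates directly into a few lines of code. Below is a minimal Python sketch; the function name find_largest is my own choice (a real program would simply call Python's built-in max()), but the loop mirrors our two-at-a-time procedure exactly:

```python
def find_largest(numbers):
    """Return the largest value by scanning the list two numbers at a time."""
    largest = numbers[0]      # start by assuming the first number is the largest
    for n in numbers[1:]:     # compare the largest-so-far with each next number
        if n > largest:
            largest = n       # found a bigger one; remember it
    return largest

print(find_largest([7, 24, 62, 11, 4, 39, 42, 5, 97, 54]))  # prints 97
```

Notice that the code never looks at more than two numbers at once, yet it works unchanged whether the list holds 10 numbers or a billion.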
Figure 1.2 Three-stage process describing computational thinking. From Repenning, A., Basawapatna, A., & Escherle, N. (2016). Computational thinking tools. In 2016 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC) (pp. 218–222), September.

What did we just do? We broke down a complex problem (looking through 10 numbers) into a set of small problems (comparing two numbers at a time). This process is called decomposition, which refers to identifying small steps to solve a large problem. More than that, we derived a process that could be applied not just to 10 numbers (which is not that complex), but to 100 numbers, 1000 numbers, or a billion numbers! This is called abstraction and generalization. Here, abstraction refers to treating the actual object of interest (10 numbers) as a series of numbers, and generalization refers to being able to devise a process that is applicable to the abstracted quantity (a series of numbers) and not just the specific objects (the given 10 numbers). And there you have an example of computational thinking. We approached a problem to find a solution using a systematic process that can be expressed using clear, feasible computational steps. And that is all. You do not need to know any programming language to do this. Sure, you could write a computer program to carry out this process (an algorithm). But here, our focus is on the thinking behind it. Let us take one more step with the previous example. Assume you are interested not just in the largest number, but also the second largest, third largest, and so on. One way to do this is to sort the numbers in some (increasing or decreasing) order. It looks easy when you have such a small set of numbers. But imagine you have a huge unsorted shelf of books that you want to alphabetize. Not only is this a tougher problem than the previous one, but it becomes increasingly challenging as the number of items increases.
So, let us step back and try to think of a systematic approach. A natural way to solve the problem would be just to scan the shelf and look for out-of-order pairs – for instance, Rowling, J. K., followed by Lee, Stan – and flip them around. Flip out-of-order pairs, then continue your scan of the rest of the shelf, and start again at the beginning of the shelf each time you reach the end, until you make a complete pass without finding a single out-of-order pair on the entire shelf. That will get your job done. But depending on the size of your collection and how unordered the books are at the beginning of the process, it will take a lot of time. It is not a very efficient tactic. Here is an alternative approach. Let us pick any book at random, say Lee, Stan, and reorder the shelf so that all the books that come earlier (letters to the left of “L” in the alphabet, A–K) than Lee, Stan, are on the left-hand side of it, and the later ones (M–Z) are on the right. At the end of this step, Lee, Stan, is in its final position, probably near the middle. Next you perform the same steps on the subshelf of books on the left, and separately on the subshelf of books on the right. Continue this effort until every book is in its final position, and thus the shelf is sorted. Now you might be wondering, what is the easiest way to sort the subshelves? Let us take the same set of numbers from the last example and see how it works. Assume that you have picked the first number, 7, as the chosen one. So, you want all the numbers that are smaller than 7 on the left-hand side of it and the larger ones on the right. You can start by assuming 7 is the lowest number in the queue and therefore that its final position will be first, its current position. Now you compare the rest of the numbers with 7 and adjust its position accordingly. Let us start at the beginning. You have 24 at the beginning of the rest, which is larger than 7.
Therefore, the tentative position of 7 remains at the beginning. Next is 62, which is, again, larger than 7; therefore, no change in the tentative position of 7. Same for the next number, 11. Next, the comparison is between 4 and 7. Unlike the previous three numbers, 4 is smaller than 7. Here, your assumption of 7 as the smallest number in the queue is rendered incorrect. So, you need to readjust your assumption of 7 from smallest to second smallest. Here is how to perform the readjustment. First, you switch the places of 4 and the number in the second position, 24. As a result the queue becomes 7, 4, 62, 11, 24, 39, 42, 5, 97, 54. Then the tentative position of 7 shifts to the second position, right after 4, making the queue 4, 7, 62, 11, 24, 39, 42, 5, 97, 54. Now you might be thinking, why not swap 7 and 4 instead of 24 and 4? The reason is that you started with the assumption that 7 is the smallest number in the queue, and so far during the comparisons you have found just one violation of that assumption, namely 4. Therefore, it is logical that at the end of the current comparison you adjust your assumption so that 7 is the second smallest element and 4 is the smallest one, which is reflected by the current queue. Moving on with the comparisons, the next numbers in the queue are 39 and 42; both are larger than 7, so there is no change in our assumption. The next number is 5, which is, again, smaller than 7. So, you follow the same drill as you did with 4: swap the third element of the queue with 5 to readjust your assumption, making 7 the third smallest element in the queue, and continue the process until you reach the end of the queue. At the end of this step, your queue is transformed into 4, 5, 7, 11, 24, 39, 42, 62, 97, 54, and the initial assumption has evolved, as now 7 is the third smallest number in the queue. So now, 7 has been placed in its final position.
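This pick-a-pivot-and-split idea is the core step of the well-known quicksort algorithm. Here is a short Python sketch of it, written for clarity rather than efficiency (the function name quicksort and the list-based partitioning are my own illustration, not the exact in-place procedure walked through above):

```python
def quicksort(numbers):
    """Sort by picking a pivot, splitting into smaller/larger groups, and recursing."""
    if len(numbers) <= 1:                  # a list of 0 or 1 items is already sorted
        return numbers
    pivot = numbers[0]                     # the "chosen one," like 7 in our example
    left = [n for n in numbers[1:] if n < pivot]    # everything smaller than the pivot
    right = [n for n in numbers[1:] if n >= pivot]  # everything at least as large
    return quicksort(left) + [pivot] + quicksort(right)  # pivot lands in its final spot

print(quicksort([7, 24, 62, 11, 4, 39, 42, 5, 97, 54]))
# prints [4, 5, 7, 11, 24, 39, 42, 54, 62, 97]
```

After the first call, the pivot 7 sits in its final position with 4 and 5 to its left, exactly as in the walkthrough; the recursion then handles each side separately.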
Notice that all the elements to the left (4, 5) of 7 are smaller than 7, and the larger ones are on the right. If you now perform the same set of steps with the numbers on the left, and separately with the numbers on the right, every number will fall into the right place and you will have a perfectly ordered list of ascending numbers. Once again, a nice characteristic that all these approaches share is that the process for finding a solution is clear, systematic, and repeatable, regardless of the size of the input (number of numbers or books). That is what makes it computationally feasible. Now that you have seen these examples, try finding more problems around you and see if you can practice your computational thinking by devising solutions in this manner. Below are some possibilities to get you started.

Try It Yourself 1.1: Computational Thinking

For each of the following problem-solving situations, explain how you apply computational thinking, that is, how you abstract the situation, break the complex problem into small subproblems, and bring together subsolutions to solve the problem.

1. Find a one-hour slot in your schedule when the preceding or following event is not at home.
2. Visit five different places while you are running errands with the least amount of travel time and without crossing any road, sidewalk, or location more than once.
3. Strategize your meetings with potential employers at a job fair so that you can optimize connecting with both high-profile companies (long lines) and startups (short lines).

1.6 Skills for Data Science

By now, hopefully you are convinced that: (1) data science is a flourishing and fantastic field; (2) it is virtually everywhere; and (3) perhaps you want to pursue it as a career! OK, maybe you are still pondering the last one, but if you are convinced about the first two and still holding this book, you may be at least curious about what you should have in your toolkit to be a data scientist.
Let us look carefully at what data scientists are, what they do, and what kinds of skills one may need to make their way in and through this field. One Twitter quip36 about data scientists captures their skill set particularly well: “Data Scientist (n.): Person who is better at statistics than any software engineer and better at software engineering than any statistician.” In her Harvard Business Review article,37 noted academic and business executive Jeanne Harris listed some skills that employers expect from data scientists: willingness to experiment, proficiency in mathematical reasoning, and data literacy. We will explore these concepts in relation to what business professionals are seeking in a potential candidate and why.

1. Willing to Experiment. A data scientist needs to have the drive, intuition, and curiosity not only to solve problems as they are presented, but also to identify and articulate problems on her own. Intellectual curiosity and the ability to experiment require an amalgamation of analytical and creative thinking. To explain this from a more technical perspective, employers are seeking applicants who can ask questions to define intelligent hypotheses and to explore the data utilizing basic statistical methods and models. Harris also notes that employers incorporate questions in their application process to determine the degree of curiosity and creative thinking of an applicant – the purpose of these questions is not to elicit a specific correct answer, but to observe the approach and techniques used to discover a possible answer. “Hence, job applicants are often asked questions such as ‘How many golf balls would fit in a school bus?’ or ‘How many sewer covers are there in Manhattan?’.”

2. Proficiency in Mathematical Reasoning. Mathematical and statistical knowledge is the second critical skill for a potential applicant seeking a job in data science. We are not suggesting that you need a Ph.D.
in mathematics or statistics, but you do need to have a strong grasp of the basic statistical methods and how to employ them. Employers are seeking applicants who can demonstrate their ability in reasoning, logic, interpreting data, and developing strategies to perform analysis. Harris further notes that “interpretation and use of numeric data are going to be increasingly critical in business practices. As a result, an increasing trend in hiring for most companies is to check if applicants are adept at mathematical reasoning.”

3. Data Literacy. Data literacy is the ability to extract meaningful information from a dataset, and any modern business has a collection of data that needs to be interpreted. A skilled data scientist plays an intrinsic role for businesses through an ability to assess a dataset for relevance and suitability for the purpose of interpretation, to perform analysis, and to create meaningful visualizations to tell valuable data stories. Harris observes that “data literacy training for business users is now a priority. Managers are being trained to understand which data is suitable, and how to use visualization and simulation to process and interpret it.” Data-driven decision-making is a driving force for innovation in business, and data scientists are integral to this process. Data literacy is an important skill, not just for data scientists, but for all. Scholars and educators have started arguing that, similar to the abilities of reading and writing that are essential in any educational program, data literacy is a basic, fundamental skill, and should be taught to all. More on this can be found in the FYI box that follows.

FYI: Data Literacy

People often complain when the forecast said there was a 10% chance of rain and it starts pouring down. This disappointment stems from their lack of understanding of how data is translated to information. In this case, the data comes from prior observations related to weather.
Essentially, if there were 100 days observed before (probably over decades) with the same weather conditions (temperature, humidity, pressure, wind, etc.), and it rained on 10 of those days, that is conveyed as a 10% chance of rain. When people mistake that information for a binary decision (since there is a 90% chance of no rain, it will not rain at all), it is the result of a lack of data literacy. There are many other day-to-day incidents we encounter in which information is conveyed to us based on some data analysis. Some other examples include the ads we see when visiting websites, the way political messages are structured, and the resource allocation decisions your town makes. Some of these may look questionable and others affect us in subtle ways that we may not comprehend. But most of these could be resolved if only we had a better understanding of how data turns into information. As we rely more and more on capturing and leveraging large amounts of data for making important decisions that affect every aspect of our lives – from personalized recommendations about what we should buy and who we should date, to self-driving cars and reversing climate change – this issue of data literacy becomes increasingly important. And this is not just for data scientists, but for everyone who is subjected to such experiences. In fact, in a way, this is more important for everyone other than data scientists, because at least a data scientist will be required to learn this as a part of their training, whereas others may not even realize that they lack such an important skill. If you are an educator, I strongly encourage you to take these ideas – from this book or from other places – and use your position and power to integrate data literacy in whichever way possible. It does not matter if your students are high-schoolers or graduates, whether they are majoring in biology or political science; they all could use a discussion on data literacy.
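The rain forecast in the FYI box is just a relative frequency computed from past data. A tiny Python sketch, using made-up historical observations (the list similar_days below is hypothetical, purely for illustration), shows the idea:

```python
# Hypothetical record of 100 past days with similar weather conditions:
# True means it rained that day, False means it did not.
similar_days = [True] * 10 + [False] * 90

# The forecast is simply the fraction of those similar days on which it rained.
chance_of_rain = sum(similar_days) / len(similar_days)
print(f"Chance of rain: {chance_of_rain:.0%}")  # prints "Chance of rain: 10%"
```

A 10% chance is not a promise of a dry day; it says that on 10 out of every 100 comparable days in the record, it rained.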
In another view, Dave Holtz blogs about the specific skill sets desired by various positions to which a data scientist may apply. He lists basic types of data science jobs:38

1. A Data Scientist Is a Data Analyst Who Lives in San Francisco! Holtz notes that, for some companies, a data scientist and a data analyst are synonymous. These roles are typically entry-level and will work with pre-existing tools and applications that require the basic skills to retrieve, wrangle, and visualize data. These digital tools may include MySQL databases and advanced functions within Excel such as pivot tables and basic data visualizations (e.g., line and bar charts). Additionally, the data analyst may perform the analysis of experimental testing results or manage other pre-existing analytical toolboxes such as Google Analytics or Tableau. Holtz further notes that “jobs such as these are excellent entry-level positions, and may even allow a budding data scientist to try new things and expand their skillset.”

2. Please Wrangle Our Data! Companies will discover that they are drowning in data and need someone to develop a data management system and infrastructure that will house the enormous (and growing) dataset, and create access to perform data retrieval and analysis. “Data engineer” and “data scientist” are the typical job titles you will find associated with this type of required skill set and experience. In these scenarios, a candidate will likely be one of the company’s first data hires and thus this person should be able to do the job without significant statistics or machine-learning expertise. A data scientist with a software engineering background might excel at a company like this, where it is more important that they make meaningful, data-driven contributions to the production code and provide basic insights and analyses. Mentorship opportunities for junior data scientists may be less plentiful at a company like this.
As a result, an associate will have great opportunities to shine and grow via trial by fire, but there will be less guidance and a greater risk of flopping or stagnating.

3. We Are Data. Data Is Us. There are a number of companies for whom their data (or their data analysis platform) is their product. These environments offer intense data analysis or machine learning opportunities. Ideal candidates will likely have a formal mathematics, statistics, or physics background and hope to continue down a more academic path. Data scientists at these types of firms would focus more on producing data-driven products than on answering operational corporate questions. Companies that fall into this group include consumer-facing organizations with massive amounts of data and companies that offer a data-based service.

4. Reasonably Sized Non-Data Companies Who Are Data-Driven. This categorizes many modern businesses. This type of role involves joining an established team of other data scientists. The company evaluates data but is not entirely concerned about data. Its data scientists perform analysis, touch production code, visualize data, etc. These companies are either looking for generalists or they are looking to fill a specific niche where they feel their team is lacking, such as data visualization or machine learning. Some of the more important skills when interviewing at these firms are familiarity with tools designed for “big data” (e.g., Hive or Pig), and experience with messy, real-life datasets. These skills are summarized in Figure 1.3.

Hands-On Example 1.2: Analyzing Data

Although we have not yet covered any theory, techniques, or tools, we can still get a taste of what it is like to work on a data-driven problem. We will look at an example that gives a glimpse of what kinds of things people do as a data scientist.
Specifically, we will start with a data-driven problem, identify a data source, collect data, clean the data, analyze the data, and present our findings.

[Figure 1.3 Types of data science roles, showing how important each skill (basic tools, statistics, machine learning, data munging, data visualization and communication, thinking like a data scientist, multivariable calculus and linear algebra, software engineering) is – very important, somewhat important, or not that important – for each of the four job types above.39]

At this point, since I am assuming no prior background in programming, statistics, or data science techniques, we are going to follow a very simple process and walk through an easy example. Eventually, as you develop a stronger technical background and understand the ins and outs of data science methods, you will be able to tackle problems with bigger datasets and more complex analyses. For this example, we will use the dataset of average heights and weights for American women available from OA 1.1. This file is in comma-separated values (CSV) format – something that we will revisit in the next chapter. For now, go ahead and download it. Once downloaded, you can open this file in a spreadsheet program such as Microsoft Excel or Google Sheets. For your reference, this data is also provided in Table 1.1. As you can see, the dataset contains a sample of 15 observations. Let us consider what is present in the dataset. At first look, it is clear that the data is already sorted – both the height and weight numbers range from small to large. That makes it easier to see the boundaries of this dataset – height ranges from 58 to 72, and weight ranges from 115 to 164. Next, let us consider averages. We can easily compute average height by adding up the numbers in the “Height” column and dividing by 15 (because that is how many observations we have).
That yields a value of 65. In other words, we can conclude that the average height of an American woman is 65 inches, at least according to these 15 observations. Similarly, we can compute the average weight – 136 pounds in this case. The dataset also reveals that an increase in height goes along with an increase in weight. This may be clearer using a visualization. If you know any kind of spreadsheet program (e.g., Microsoft Excel, Google Sheets), you can easily generate a plot of these values. Figure 1.4 provides an example. Look at the curve. As we move from left to right (Height), the line increases in value (Weight).

Table 1.1 Average height and weight of American women.

Observation  Height (inches)  Weight (lbs)
 1           58               115
 2           59               117
 3           60               120
 4           61               123
 5           62               126
 6           63               129
 7           64               132
 8           65               135
 9           66               139
10           67               142
11           68               146
12           69               150
13           70               154
14           71               159
15           72               164

Now, let us ask a question: On average, how much increase can we expect in weight with an increase of one inch in height? Think for a moment how you would address this question. Do not proceed until you have figured out a solution yourself. A simple method is to compute the differences in height (72 − 58 = 14 inches) and weight (164 − 115 = 49 pounds), then divide the weight difference by the height difference, that is, 49/14, leading to 3.5. In other words, we see that, on average, one inch of height difference leads to a difference of 3.5 pounds in weight. If you want to dig deeper, you may discover that the weight change with respect to the height change is not that uniform. On average, an increase of an inch in height results in an increase of less than 3 pounds in weight for heights between 58 and 65 inches (remember that 65 inches is the average). For heights greater than 65 inches, weight increases more rapidly (by about 4 pounds per inch up to 70 inches, and 5 pounds per inch beyond 70 inches).
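The computations above can also be reproduced with a few lines of Python. This is only a sketch: the column values from Table 1.1 are typed in directly rather than loaded from the OA 1.1 CSV file.

```python
# Height (inches) and weight (lbs) for the 15 observations in Table 1.1
heights = [58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72]
weights = [115, 117, 120, 123, 126, 129, 132, 135, 139, 142, 146, 150, 154, 159, 164]

avg_height = sum(heights) / len(heights)   # 65.0 inches
avg_weight = sum(weights) / len(weights)   # about 136.7 pounds (~136)

# Average change in weight per inch of height:
# total weight range divided by total height range = 49/14 = 3.5
slope = (max(weights) - min(weights)) / (max(heights) - min(heights))

print(avg_height, avg_weight, slope)
```

Notice that the code mirrors the by-hand procedure exactly: sum and divide for the averages, and the ratio of the two ranges for the per-inch change.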
Here is another question: What would you expect the weight of an American woman who is 57 inches tall to be? To answer this, we will have to extrapolate from the data we have. We know from the previous paragraph that in the lower range of height (less than the average of 65 inches), with each inch of height change, weight changes by about 3 pounds. Given that we know that for someone who is 58 inches in height the corresponding weight is 115 pounds, if we deduct an inch from the height, we should deduct 3 pounds from the weight. This gives us the answer (or at least our guess), 112 pounds. What about the other end of the data, with the larger values for weight and height? What would you expect the weight of someone who is 73 inches tall to be?

[Figure 1.4 Visualization of height vs. weight data: a line chart with Height (58–72 inches) on the horizontal axis and Weight (0–180 lbs) on the vertical axis.]

The correct estimate is 169 pounds. Students should verify this answer. More than the answer, what is important is the process. Can you explain it to someone? Can you document it? Can you repeat it for the same problem but with different values, or for similar problems, in the future? If the answer to these questions is “yes,” then you just practiced some science. Yes, it is important for us not only to solve data-driven problems, but to be able to explain, verify, and repeat that process. And that, in short, is what we are going to do in data science.

Try It Yourself 1.2: Analyzing Data

Let us practice these data analysis methods. For this exercise, you are going to use a dataset that describes the list price (X) and best price (Y), in $1000s, for a new GMC pickup truck. The dataset is available from OA 1.2. Use this dataset to predict the best price of a pickup truck that has been listed at $24,000 by a dealer.

1.7 Tools for Data Science

A couple of sections ago, we discussed what kind of skills one needs to have to be a successful data scientist.
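The extrapolation just described can be sketched in a few lines of Python, using (as the text does) roughly 3 lbs per inch near the short end of the data and 5 lbs per inch near the tall end:

```python
# Known boundary observations from Table 1.1
low_h, low_w = 58, 115    # shortest observation
high_h, high_w = 72, 164  # tallest observation

# Local rates of change estimated from the data (see text):
# ~3 lbs/inch near the low end, ~5 lbs/inch near the high end
est_57 = low_w - 3 * (low_h - 57)     # 115 - 3 = 112 pounds
est_73 = high_w + 5 * (73 - high_h)   # 164 + 5 = 169 pounds

print(est_57, est_73)
```

The point is not the arithmetic but the repeatability: with the rule written down as code, anyone can rerun it, inspect it, and apply it to new values.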
We also know by now that a lot of what data scientists do involves processing data and deriving insights. An example was given above, along with a hands-on practice problem. These things should at least give you an idea of what you may expect to do in data science. Going forward, it is important that you develop a solid foundation in statistical techniques (covered in Chapter 3) and computational thinking (covered in an earlier section). And then you need to pick up a couple of programming and data processing tools. A whole section of this book is devoted to such tools (Part II) and covers some of the most used tools in data science – Python, R, and SQL. But let us quickly review these here so we understand what to expect when we get to those chapters.

Let me start by noting that there are no special tools for doing data science; there just happen to be some tools that are more suitable for the kind of things one does in data science. And so, if you already know some programming language (e.g., C, Java, PHP) or a scientific data processing environment (e.g., Matlab), you could use them to solve many or most of the problems and tasks in data science. Of course, if you go through this book, you will also find that Python or R can generate a graph with one line of code – something that could take you a lot more effort in C or Java. In other words, while Python and R were not specifically designed for people to do data science, they provide excellent environments for quick implementation, visualization, and testing for most of what one would want to do in data science – at least at the level in which we are interested in this book.

Python is a scripting language. This means that programs written in Python do not need to be compiled as a whole like you would do with a program in C or Java; instead, a Python program runs line by line.
The language (its syntax and structure) also provides a very easy learning curve for the beginner, while giving very powerful tools to advanced programmers. Let us see this with an example. If you want to write the classic “Hello, World” program in Java, here is how it goes:

Step 1: Write the code and save it as HelloWorld.java.

    public class HelloWorld {
        public static void main(String[] args) {
            System.out.println("Hello, World");
        }
    }

Step 2: Compile the code.

    % javac HelloWorld.java

Step 3: Run the program.

    % java HelloWorld

This should display “Hello, World” on the console. Do not worry if you have never done Java (or any) programming before and all this looks confusing. I hope you can at least see that printing a simple message on the screen is quite complicated (and we have not even done any data processing!). In contrast, here is how you do the same in Python:

Step 1: Write the code and save it as hello.py.

    print("Hello, World")

Step 2: Run the program.

    % python hello.py

Again, do not worry about actually trying this now. We will see detailed instructions in Chapter 5. For now, at least you can appreciate how easy it is to code in Python. And if you want to accomplish the same in R, you type the same thing – print("Hello, World") – in the R console. Both Python and R offer a very easy introduction to programming, and even if you have never done any programming before, it is possible to start solving data problems from day 1 of using either of these. Both of them also offer plenty of packages that you can import or call to accomplish more complex tasks such as machine learning (see Part III of this book). Most times in this book we will see data available to us in simple text files formatted as CSV (comma-separated values), and we can load that data into a Python or R environment. However, such a method has a major limit – the data we can store in a file or load into a computer’s memory cannot be beyond a certain size.
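Loading CSV data like this into Python can be sketched with the standard library alone. In this illustration, the file contents are inlined via io.StringIO so the snippet is self-contained; with a real file, such as the one from OA 1.1, you would use open("women.csv") instead.

```python
import csv
import io

# Inlined stand-in for a CSV file with a header row (Height,Weight);
# a real file would be opened with open("women.csv", newline="")
data = io.StringIO("Height,Weight\n58,115\n59,117\n60,120\n")

# csv.DictReader labels each value by its column header,
# turning raw text into structured (labeled) records
rows = [(int(r["Height"]), int(r["Weight"])) for r in csv.DictReader(data)]

print(rows)  # [(58, 115), (59, 117), (60, 120)]
```

A design note: DictReader is used here (rather than plain csv.reader) precisely because it attaches the column labels to each value, which is the "structured data" property we rely on throughout the book.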
In such cases (and for some other reasons), we may need better data storage in the form of an SQL (Structured Query Language) database. The field of databases is very rich, with lots of tools, techniques, and methods for addressing all kinds of data problems. We will, however, limit ourselves to working with SQL databases through Python or R, primarily so that we can work with large and remote datasets. In addition to these top three most used tools for data science (see Appendix G), we will also skim through basic UNIX. Why? Because a UNIX environment allows one to solve many data problems and day-to-day data processing needs without writing any code. After all, there is no perfect tool that can address all our data science needs or meet all of our preferences and constraints. And so, in this book we will pick up several of the most popular tools in data science, while solving data problems using a hands-on approach.

1.8 Issues of Ethics, Bias, and Privacy in Data Science

This chapter (and this book) may give the impression that data science is all good, that it is the ultimate path to solving all of society’s and the world’s problems. First of all, I hope you do not buy such exaggerations. Second, even at its best, data science and, in general, anything that deals with data or employs data analysis using a statistical-computational technique, bears several issues that should concern us all – as users or producers of data, or as data scientists. Each of these issues is big and serious enough to warrant its own separate book (and such books exist), but lengthy discussions would be beyond the scope of this book. Instead, we will briefly mention these issues here and call them out at different places throughout this book when appropriate. Many of the issues related to privacy, bias, and ethics can be traced back to the origin of the data. Ask – how, where, and why was the data collected? Who collected it?
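The connect–query–fetch pattern for talking to an SQL database from Python can be sketched as follows. This is an illustration only: it uses Python’s built-in sqlite3 module with an in-memory database, whereas the book’s later chapters use MySQL; the same pattern applies there through a MySQL driver.

```python
import sqlite3

# Open a throwaway in-memory database (a real project would connect
# to a database server holding data too large for a single file)
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Create a small table and insert a few height/weight rows
cur.execute("CREATE TABLE women (height INTEGER, weight INTEGER)")
cur.executemany("INSERT INTO women VALUES (?, ?)",
                [(58, 115), (59, 117), (60, 120)])

# Let the database do the computation instead of loading all rows into memory
cur.execute("SELECT AVG(weight) FROM women WHERE height < 60")
avg = cur.fetchone()[0]
print(avg)  # 116.0

conn.close()
```

This is exactly the appeal of SQL for large datasets: the filtering and averaging happen inside the database, and only the single answer travels back to the Python program.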
What did they intend to use it for? More importantly, if the data was collected from people, did these people know: (1) that such data was being collected about them; and (2) how the data would be used? Often those collecting data mistake the availability of data for the right to use that data. For instance, just because data on a social media service such as Twitter is available on the Web, it does not mean that one can collect and sell it for material gain without the consent of the users of that service. In April 2018, a case surfaced in which a data analytics firm, Cambridge Analytica, obtained data about a large number of Facebook users to use for political campaigning. Those Facebook users did not even know that: (1) such data about them was collected and shared by Facebook with third parties; and (2) the data was used to target political ads at them. This incident shed light on something that was not really new; for many years, various companies such as Facebook and Google have collected enormous amounts of data about and from their users in order not only to improve and market their products, but also to share and/or sell it to other entities for profit. Worse, most people do not know about these practices. As the old saying goes, “there is no free lunch.” So, when you are getting an email service or a social media account for “free,” ask why. As it is often understood, “if you are not paying for it, you are the product.” Sure enough, for Facebook, each user is worth $158. Equivalent values for other major companies are $182/user for Google and $733/user for Amazon.40 There are many cases throughout the history of our digital lives where data about users has been intentionally or unintentionally exposed or shared, causing various levels of harm to the users. And this is just the tip of the iceberg in terms of ethical or privacy violations.
What we are often not aware of is how even ethically collected data can be highly biased. And if a data scientist is not careful, such inherent bias in the data can show up in the analysis and the insights developed, often without anyone actively noticing it. Many data and technology companies are trying to address these issues, often with very little to no success. But it is admirable that they are trying. And while we also cannot be fully successful at fending off biases and prejudices or being completely fair, we need to try. So, as we proceed in this book with data collection and analysis methods, keep these issues at the back of your mind. And, wherever appropriate, I will present some pointers in FYI boxes, such as the one below.

FYI: Fairness

Understanding the gravity of ethics in practicing data analytics, Google, a company that has thrived during the last two decades guided by machine learning, recently acknowledged the biases in traditional machine learning approaches in one of its blog posts. You can read more about this announcement here: https://developers.google.com/machine-learning/fairness-overview/. In this regard, computational social science has a long way to go to adequately deal with ordinary human biases. Just as with the field of genomics, to which computational social science has often been compared, it may well take a generation or two before researchers combine high-level competence in data science with equivalent expertise in anthropology, sociology, political science, and other social science disciplines. There is a community, called Fairness, Accountability, and Transparency (FAT), that has emerged in recent years and is trying to address some of these issues, or at least is shedding light on them. This community, thankfully, includes scholars from the fields of data science, machine learning, artificial intelligence, education, information science, and several branches of the social sciences.
This is a very important topic in data science and machine learning, and, therefore, we will continue these discussions throughout this book at appropriate places with such FYI boxes.

Summary

Data science is new in some ways and not new in other ways. Many would argue that statisticians have already been doing a lot of what today we consider data science. On the other hand, we have an explosion of data in every sector, with data varying a great deal in its nature, format, size, and other aspects. Such data has also become substantially more important in our daily lives – from connecting with our friends and family to doing business. New problems and new opportunities have emerged, and we have only scratched the surface of possibilities. It is not enough to simply solve a data problem; we also need to create new tools, techniques, and methods that offer verifiability, repeatability, and generalizability. This is what data science covers, or at least is meant to cover. And that is how we are going to present data science in this book. The present chapter provided several views on how people think and talk about data science, how it affects or is connected to various fields, and what kinds of skills a data scientist should have. Using a small example, we practiced (1) data collection, (2) descriptive statistics, (3) correlation, (4) data visualization, (5) model building, and (6) extrapolation and regression analysis. As we progress through various parts of this book, we will dive into all of these and more in detail, and learn scientific methods, tools, and techniques to tackle data-driven problems, helping us derive interesting and important insights for making decisions in various fields – business, education, healthcare, policy-making, and more. Finally, we touched on some of the issues in data science, namely privacy, bias, and ethics. More discussion of these issues will follow as we proceed through different topics in this book.
In the next chapter, we will learn more about data – types, formats, cleaning, and transforming, among other things. Then, in Chapter 3, we will explore various techniques – most of them statistical in nature. We can learn about them in theory and practice them by hand using small examples. But of course, if we want to work with real data, we need to develop some technical skills. For this, we will acquire several tools in Chapters 4–7, including UNIX, R, Python, and MySQL. By that time, you should be able to build your own models using various programming tools and statistical techniques to solve data-driven problems. But today’s world needs more than that. So, we will go a few steps further with three chapters on machine learning. Then, in Chapter 11, we will take several real-world examples and applications and see how we can apply all of our data science knowledge to solve problems in various fields and derive decision-making insights. Finally, we will learn (at least on the surface) some of the core methodologies for collecting and analyzing data, as well as evaluating systems and analyses, in Chapter 12. Keep in mind that the appendices discuss much of the background and basic material. So, make sure to look at the appropriate sections in the appendices as you move forward.

Key Terms

• Data: Information that is factual, such as measurements or statistics, which can be used as a basis for reasoning, discussion, or prediction.
• Information: Data that are endowed with meaning and purpose.
• Science: The systematic study of the structure and behavior of the physical and natural world through observations and experiments.
• Data science: The field of study and practice that involves the collection, storage, and processing of data in order to derive important insights into a problem or a phenomenon.
• Information science: A thorough understanding of information considering the different contexts and circumstances related to the data that is created, generated, and shared, mostly by human beings.
• Business analytics: The skills, technologies, and practices for continuous iterative exploration and investigation of past and current business performance to gain insight and be strategic.
• Computational thinking: A process of using abstraction and decomposition when attacking a large complex task or designing a large complex system.

Conceptual Questions

1. What is data science? How does it relate to and differ from statistics?
2. Identify three areas or domains in which data science is being used and describe how.
3. If you are allocated 1 TB of data to use on your phone, how many years will it take until you run out of your quota, at a consumption rate of 1 GB/month?
4. We saw an example of bias in predicting future crime potential due to misrepresentation in the available data. Find at least two other instances where an analysis, a system, or an algorithm exhibited some sort of bias or prejudice.

Hands-On Problems

Problem 1.1

Imagine you see yourself as the next Harland Sanders (founder of KFC) and want to learn about the poultry business at a much earlier age than Mr. Sanders did. You want to figure out what kind of feed can help grow healthier chickens. Below is a dataset that might help. The dataset is sourced from OA 1.3.

#   Weight (lbs)  Feed
1   179           Horsebean
2   160           Horsebean
3   136           Horsebean
4   227           Horsebean
5   217           Horsebean
6   168           Horsebean
7   108           Horsebean
8   124           Horsebean
9   143           Horsebean
10  140           Horsebean
11  309           Linseed
12  229           Linseed
13  181           Linseed
14  141           Linseed
15  260           Linseed
16  203           Linseed
17  148           Linseed
18  169           Linseed
19  213           Linseed
20  257           Linseed
21  244           Linseed
22  271           Linseed
23  243           Soybean
24  230           Soybean
25  248           Soybean
26  327           Soybean
27  329           Soybean
28  250           Soybean
29  193           Soybean
30  271           Soybean
31  316           Soybean
32  267           Soybean
33  199           Soybean
34  171           Soybean
35  158           Soybean
36  248           Soybean
37  423           Sunflower
38  340           Sunflower
39  392           Sunflower
40  339           Sunflower
41  341           Sunflower
42  226           Sunflower
43  320           Sunflower
44  295           Sunflower
45  334           Sunflower
46  322           Sunflower
47  297           Sunflower
48  318           Sunflower
49  325           Meatmeal
50  257           Meatmeal
51  303           Meatmeal
52  315           Meatmeal
53  380           Meatmeal
54  153           Meatmeal
55  263           Meatmeal
56  242           Meatmeal
57  206           Meatmeal
58  344           Meatmeal
59  258           Meatmeal
60  368           Casein
61  390           Casein
62  379           Casein
63  260           Casein
64  404           Casein
65  318           Casein
66  352           Casein
67  359           Casein
68  216           Casein
69  222           Casein
70  283           Casein
71  332           Casein

Based on this dataset, which type of chicken feed appears the most beneficial for a thriving poultry business?

Problem 1.2

The following table contains an imaginary dataset of auto insurance providers and their ratings as provided by the latest three customers. If you had to choose an auto insurance provider based on these ratings, which one would you opt for?

#  Insurance provider  Rating (out of 10)
1  GEICO               4.7
2  GEICO               8.3
3  GEICO               9.2
4  Progressive         7.4
5  Progressive         6.7
6  Progressive         8.9
7  USAA                3.8
8  USAA                6.3
9  USAA                8.1

Problem 1.3

Imagine you have grown to like Bollywood movies recently and started following some of the well-known actors from the Hindi film industry. Now you want to predict which of these actors’ movies you should watch when a new one is released. Here is a movie review dataset from the past that might help. It consists of three attributes: movie name, leading actor in the movie, and its IMDB rating.
[Note: assume that a better rating means a more watchable movie.]

Leading actor  Movie name          IMDB rating (out of 10)
Irfan Khan     Knock Out           6.0
Irfan Khan     New York            6.8
Irfan Khan     Life in a … metro   7.4
Anupam Kher    Striker             7.1
Anupam Kher    Dirty Politics      2.6
Anil Kapoor    Calcutta Mail       6.0
Anil Kapoor    Race                6.6

Notes

1. What is data science? https://datajobs.com/what-is-data-science
2. Davenport, T. H., & Patil, D. J. (2012). Data scientist: the sexiest job of the 21st century. Harvard Business Review, October: https://hbr.org/2012/10/data-scientist-the-sexiest-job-of-the-21st-century
3. Fortune.com: Data science is still white hot: http://fortune.com/2015/05/21/data-science-white-hot/
4. Dhar, V. (2013). Data science and prediction. Communications of the ACM, 56(12), 64–73.
5. Computer Weekly: Data to grow more quickly says IDC’s Digital Universe study: https://www.computerweekly.com/news/2240174381/Data-to-grow-more-quickly-says-IDCs-Digital-Universe-study
6. Analytics Vidhya Content Team. (2015). 13 amazing applications/uses of data science today, Sept. 21: https://www.analyticsvidhya.com/blog/2015/09/applications-data-science/
7. Kaggle: Lending Club loan data: https://www.kaggle.com/wendykan/lending-club-loan-data
8. Ahmed, S. Loan eligibility prediction: https://www.kdnuggets.com/2018/09/financial-data-analysis-loan-eligibility-prediction.html
9. Data Science for Social Good: https://dssg.uchicago.edu/event/using-data-for-social-good-and-public-policy-examples-opportunities-and-challenges/
10. Miller, C. C. (2008). How Obama’s internet campaign changed politics. The New York Times, Nov. 7: http://bits.blogs.nytimes.com/2008/11/07/how-obamas-internet-campaign-changed-politics/
11. What you can learn from data science in politics: http://schedule.sxsw.com/2016/events/event_PP49570
12. Lee, J., & Lim, Y. S. (2016). Gendered campaign tweets: the cases of Hillary Clinton and Donald Trump. Public Relations Review, 42(5), 849–855.
13.
Cambridge Analytica: https://en.wikipedia.org/wiki/Cambridge_Analytica
14. O’Reilly, T., Loukides, M., & Hill, C. (2015). How data science is transforming health care. O’Reilly, May 4: https://www.oreilly.com/ideas/how-data-science-is-transforming-health-care
15. Cadmus-Bertram, L., Marcus, B. H., Patterson, R. E., Parker, B. A., & Morey, B. L. (2015). Use of the Fitbit to measure adherence to a physical activity intervention among overweight or obese, post-menopausal women: self-monitoring trajectory during 16 weeks. JMIR mHealth and uHealth, 3(4).
16. Apple Heart Study: http://med.stanford.edu/appleheartstudy.html
17. Your health insurance might score you an Apple Watch: https://www.engadget.com/2016/09/28/your-health-insurance-might-score-you-an-apple-watch/
18. Argonne National Laboratory: http://www.anl.gov/about-argonne
19. Urban Center for Computation and Data: http://www.urbanccd.org/#urbanccd
20. Forbes Magazine. Fixing education with big data: http://www.forbes.com/sites/gilpress/2012/09/12/fixing-education-with-big-data-turning-teachers-into-data-scientists/
21. Brookings Institution. Big data for education: https://www.brookings.edu/research/big-data-for-education-data-mining-data-analytics-and-web-dashboards/
22. Syracuse University iSchool Blog: https://ischool.syr.edu/infospace/2012/07/16/data-science-whats-in-it-for-the-new-librarian/
23. ACRL. Keeping up with big data: http://www.ala.org/acrl/publications/keeping_up_with/big_data
24. Priceonomics. What’s the difference between data science and statistics?: https://priceonomics.com/whats-the-difference-between-data-science-and/
25. FiveThirtyEight. 2016 election forecast: https://projects.fivethirtyeight.com/2016-election-forecast/
26. New York Times. 2016 election forecast: https://www.nytimes.com/interactive/2016/upshot/presidential-polls-forecast.html?_r=0#other-forecasts
27. Mixpanel.
This is the difference between statistics and data science: https://blog.mixpanel.com/2016/03/30/this-is-the-difference-between-statistics-and-data-science/
28. Andrew Gelman. Statistics is the least important part of data science: http://andrewgelman.com/2013/11/14/statistics-least-important-part-data-science/
29. Flowingdata. Rise of the data scientist: https://flowingdata.com/2009/06/04/rise-of-the-data-scientist/
30. Wikipedia. Business analytics: https://en.wikipedia.org/wiki/Business_analytics
31. Wallace, D. P. (2007). Knowledge Management: Historical and Cross-Disciplinary Themes. Libraries Unlimited, pp. 1–14. ISBN 978-1-59158-502-2.
32. CDC. Smoking and tobacco use: https://www.cdc.gov/tobacco/data_statistics/fact_sheets/fast_facts/index.htm
33. Rowley, J., & Hartley, R. (2006). Organizing Knowledge: An Introduction to Managing Access to Information. Ashgate Publishing, pp. 5–6. ISBN 978-0-7546-4431-6: https://en.wikipedia.org/wiki/DIKW_Pyramid
34. Belkin, N. J., Cole, M., & Liu, J. (2009). A model for evaluation of interactive information retrieval. In Proceedings of the SIGIR 2009 Workshop on the Future of IR Evaluation (pp. 7–8), July.
35. Wing, J. M. (2006). Computational thinking. Communications of the ACM, 49(3), 33–35.
36. @Josh_Wills tweet on data scientists: https://twitter.com/josh_wills/status/198093512149958656
37. Harris, J. (2012). Data is useless without the skills to analyze it. Harvard Business Review, Sept. 13: https://hbr.org/2012/09/data-is-useless-without-the-skills
38. https://blog.udacity.com/2014/11/data-science-job-skills.html
39. Udacity chart on data scientist skills: http://1onjea25cyhx3uvxgs4vu325.wpengine.netdna-cdn.com/wp-content/uploads/2014/11/blog_dataChart_white.png
40. You are worth $182 to Google, $158 to Facebook and $733 to Amazon!
https://arkenea.com/blog/big-tech-companies-user-worth/

2 Data

“Data is a precious thing and will last longer than the systems themselves.” — Tim Berners-Lee

What do you need?
• A basic understanding of data sizes, storage, and access.
• Introductory experience with spreadsheets.
• Familiarity with basic HTML.

What will you learn?
• Data types, major data sources, and formats.
• How to perform basic data cleaning and transformation.

2.1 Introduction

“Just as trees are the raw material from which paper is produced, so too, can data be viewed as the raw material from which information is obtained.”1 To present and interpret information, one must start with a process of gathering and sorting data. And for any kind of data analysis, one must first identify the right kinds of information sources. In the previous chapter, we discussed different forms of data. The height–weight data we saw was numerical and structured. When you post a picture using your smartphone, that is an example of multimedia data. The datasets mentioned in the section on public policy are government or open data collections. We also discussed how and where this data is stored – from as small and local as our personal computers, to as large and remote as data warehouses. In this chapter, we will look at these and more variations of data in a more formal way. Specifically, we will discuss data types, data collection, and data formats. We will also see and practice how data is cleaned, stored, and processed.

2.2 Data Types

One of the most basic ways to think about data is whether it is structured or not. This is especially important for data science because most of the techniques that we will learn depend on one or the other inherent characteristic.
Most commonly, structured data refers to highly organized information that can be seamlessly included in a database and readily searched via simple search operations; unstructured data is essentially the opposite, devoid of any underlying structure. In structured data, different values – whether they are numbers or something else – are labeled, which is not the case for unstructured data. Let us look at these two types in more detail.

2.2.1 Structured Data

Structured data is the most important data type for us, as we will be using it for most of the exercises in this book. We have already seen it a couple of times. In the previous chapter we discussed an example that included height and weight data. That example included structured data because the data has defined fields or labels; we know “60” to be a height and “120” to be a weight for a given record (which, in this case, is for one person). But structured data does not need to be strictly numbers. Table 2.1 contains data about some customers. This data includes numbers (age, income, num.vehicles), text (housing.type), Boolean values (is.employed), and categorical data (sex, marital.stat). What matters for us is that any data we see here – whether it is a number, a category, or a text – is labeled. In other words, we know what that number, category, or text means. Pick a data point from the table – say, third row and eighth column. That is “22.” We know from the structure of the table that this data point is a number; specifically, it is the age of a customer. Which customer? The one with the ID 2848, who lives in Georgia. You see how easily we can interpret and use the data since it is in a structured format? Of course, someone had to collect, store, and present the data in such a format, but for now we will not worry about that.

2.2.2 Unstructured Data

Unstructured data is data without labels.
Here is an example:

“It was found that a female with a height between 65 inches and 67 inches had an IQ of 125–130. However, it was not clear looking at a person shorter or taller than this observation if the change in IQ score could be different, and, even if it was, it could not be possibly concluded that the change was solely due to the difference in one’s height.”

Table 2.1 Customer data sample.

custid  sex  is.employed  income  marital.stat   housing.type              num.vehicles  age  state.of.res
2068    F    NA           11300   Married        Homeowner free and clear  2             49   Michigan
2073    F    NA           0       Married        Rented                    3             40   Florida
2848    M    True         4500    Never married  Rented                    3             22   Georgia
5641    M    True         20000   Never married  Occupied with no rent     0             22   New Mexico
6369    F    True         12000   Never married  Rented                    1             31   Florida

In this paragraph, we have several data points: 65, 67, 125–130, female. However, they are not clearly labeled. If we were to do some processing, as we did in the first chapter to try to associate height and IQ, we would not be able to do that easily. And certainly, if we were to create a systematic process (an algorithm, a program) to go through such data or observations, we would be in trouble because that process would not be able to identify which of these numbers corresponds to which of the quantities. Of course, humans have no difficulty understanding a paragraph like this that contains unstructured data. But if we want to do a systematic process for analyzing a large amount of data and creating insights from it, the more structured it is, the better. As I mentioned, in this book for the most part we will work with structured data. But at times when such data is not available, we will look to other ways to convert unstructured data to structured data, or process unstructured data, such as text, directly.

2.2.3 Challenges with Unstructured Data

The lack of structure makes compilation and organizing unstructured data a time- and energy-consuming task.
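To see just how much hand-tuning this takes, here is a small sketch in Python (the patterns are hand-built for this one sentence and are not from the book) that extracts two labeled values from the quoted passage using regular expressions:

```python
import re

text = ("It was found that a female with a height between 65 inches and "
        "67 inches had an IQ of 125-130.")

# Hand-written patterns for THIS sentence only; they would break on
# slightly different phrasings - which is exactly the challenge.
heights = re.findall(r"(\d+)\s+inches", text)
iq = re.search(r"IQ of (\d+)-(\d+)", text)

record = {
    "sex": "female",
    "height_min_in": int(heights[0]),
    "height_max_in": int(heights[1]),
    "iq_range": (int(iq.group(1)), int(iq.group(2))),
}
print(record)
```

The output is a structured record, but the rules that produced it do not generalize: change “65 inches” to “5 ft 5 in” and the extraction fails silently.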
It would be easy to derive insights from unstructured data if it could be instantly transformed into structured data. However, structured data is akin to machine language, in that it makes information much easier to be parsed by computers. Unstructured data, on the other hand, is often how humans communicate (“natural language”); but people do not interact naturally with information in strict, database format. For example, email is unstructured data. An individual may arrange their inbox in such a way that it aligns with their organizational preferences, but that does not mean the data is structured. If it were truly fully structured, it would also be arranged by exact subject and content, with no deviation or variability. In practice, this would not work, because even focused emails tend to cover multiple subjects. Spreadsheets, which are arranged in a relational database format and can be quickly scanned for information, are considered structured data.

According to Brightplanet®, “The problem that unstructured data presents is one of volume; most business interactions are of this kind, requiring a huge investment of resources to sift through and extract the necessary elements, as in a Web-based search engine.”2 And here is where data science is useful. Because the pool of information is so large, current data mining techniques often miss a substantial amount of available content, much of which could be game-changing if efficiently analyzed.

2.3 Data Collections

Now, if you want to find datasets like the one presented in the previous section or in the previous chapter, where would you look? There are many places online to look for sets or collections of data. Here are some of those sources.

2.3.1 Open Data

The idea behind open data is that some data should be freely available in a public domain that can be used by anyone as they wish, without restrictions from copyright, patents, or other mechanisms of control.
Local and federal governments, non-government organizations (NGOs), and academic communities all lead open data initiatives. For example, you can visit data repositories produced by the US Government3 or the City of Chicago.4 To unlock the true potential of “information as open data,” the White House developed Project Open Data in 2013 – a collection of code, tools, and case studies – to help agencies and individuals adopt the Open Data Policy. To this extent, the US Government released a policy, M-13-13,5 that instructs agencies to manage their data, and information more generally, as an asset from the start, and, wherever possible, release it to the public in a way that makes it open, discoverable, and usable.

Following is the list of principles associated with open data as observed in the policy document:

• Public. Agencies must adopt a presumption in favor of openness to the extent permitted by law and subject to privacy, confidentiality, security, or other valid restrictions.
• Accessible. Open data are made available in convenient, modifiable, and open formats that can be retrieved, downloaded, indexed, and searched. Formats should be machine-readable (i.e., data are reasonably structured to allow automated processing). Open data structures do not discriminate against any person or group of persons and should be made available to the widest range of users for the widest range of purposes, often by providing the data in multiple formats for consumption. To the extent permitted by law, these formats should be non-proprietary, publicly available, and no restrictions should be placed on their use.
• Described. Open data are described fully so that consumers of the data have sufficient information to understand their strengths, weaknesses, analytical limitations, and security requirements, as well as how to process them.
This involves the use of robust, granular metadata (i.e., fields or elements that describe data), thorough documentation of data elements, data dictionaries, and, if applicable, additional descriptions of the purpose of the collection, the population of interest, the characteristics of the sample, and the method of data collection.
• Reusable. Open data are made available under an open license6 that places no restrictions on their use.
• Complete. Open data are published in primary forms (i.e., as collected at the source), with the finest possible level of granularity that is practicable and permitted by law and other requirements. Derived or aggregate open data should also be published but must reference the primary data.
• Timely. Open data are made available as quickly as necessary to preserve the value of the data. Frequency of release should account for key audiences and downstream needs.
• Managed Post-Release. A point of contact must be designated to assist with data use and to respond to complaints about adherence to these open data requirements.

2.3.2 Social Media Data

Social media has become a gold mine for collecting data to analyze for research or marketing purposes. This is facilitated by the Application Programming Interface (API) that social media companies provide to researchers and developers. Think of the API as a set of rules and methods for asking and sending data. For various data-related needs (e.g., retrieving a user’s profile picture), one could send API requests to a particular social media service. This is typically a programmatic call that results in that service sending a response in a structured data format, such as XML. We will discuss XML later in this chapter.
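To make the request–response shape concrete, here is a minimal Python sketch. The endpoint, parameters, and response body below are all hypothetical (real services define their own URLs and fields), but the pattern is typical: build a request URL with query parameters, receive structured markup, and read out the labeled values.

```python
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

# Hypothetical endpoint and parameters, for illustration only.
base_url = "https://api.example.com/v1/profile"
params = {"user_id": "2848", "fields": "name,picture"}
request_url = base_url + "?" + urlencode(params)
print(request_url)

# A canned response, standing in for what a live service would return.
response_body = """<profile>
  <name>Jane Doe</name>
  <picture>https://img.example.com/2848.jpg</picture>
</profile>"""

# Because the response is structured, each value is labeled by its tag.
root = ET.fromstring(response_body)
name = root.find("name").text
picture = root.find("picture").text
print(name, picture)
```

In practice you would send `request_url` over HTTP (with whatever authentication the service requires) instead of using a canned string; the parsing step stays the same.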
The Facebook Graph API is a commonly used example.7 These APIs can be used by any individual or organization to collect and use this data to accomplish a variety of tasks, such as developing new socially impactful applications, research on human information behavior, and monitoring the aftermath of natural calamities, etc. Furthermore, to encourage research on niche areas, such datasets have often been released by the social media platform itself. For example, Yelp, a popular crowd-sourced review platform for local businesses, released datasets that have been used for research in a wide range of topics – from automatic photo classification to natural language processing of review texts, and from sentiment analysis to graph mining, etc. If you are interested in learning about and solving such challenges, you can visit the Yelp.com dataset challenge8 to find out more. We will revisit this method of collecting data in later chapters.

2.3.3 Multimodal Data

We are living in a world where more and more devices exist – from lightbulbs to cars – and are getting connected to the Internet, creating an emerging trend of the Internet of Things (IoT). These devices are generating and using much data, but not all of which are “traditional” types (numbers, text). When dealing with such contexts, we may need to collect and explore multimodal (different forms) and multimedia (different media) data such as images, music and other sounds, gestures, body posture, and the use of space. Once the sources are identified, the next thing to consider is the kind of data that can be extracted from those sources. Based on the nature of the information collected from the sources, the data can be categorized into two types: structured data and unstructured data.

One of the well-known applications of such multimedia data is analysis of brain imaging data sequences – where the sequence can be a series of images from different sensors, or a time series from the same subject.
The typical dataset used in this kind of application is a multimodal face dataset, which contains output from different sensors such as EEG, MEG, and fMRI (medical imaging techniques) on the same subject within the same paradigm. In this field, statistical parametric mapping (SPM) is a well-known statistical technique, created by Karl Friston,9 that examines differences in brain activity recorded during functional neuroimaging experiments. More on this can be found at the UCL SPM website.10

If you still need more pointers for obtaining datasets, check out Appendix E, which covers not just some of the contemporary sources of datasets, but also active challenges for processing data, and creating and solving real-life problems.

2.3.4 Data Storage and Presentation

Depending on its nature, data is stored in various formats. We will start with simple kinds – data in text form. If such data is structured, it is common to store and present it in some kind of delimited way. That means various fields and values of the data are separated using delimiters, such as commas or tabs. And that gives rise to two of the most commonly used formats that store data as simple text – comma-separated values (CSV) and tab-separated values (TSV).

1. CSV (Comma-Separated Values) format is the most common import and export format for spreadsheets and databases. There is no “CSV standard,” so the format is operationally defined by the many applications that read and write it. For example, Depression.csv is a dataset that is available at UF Health, UF Biostatistics11 for downloading. The dataset represents the effectiveness of different treatment procedures on separate individuals with clinical depression.
A snippet of the file is shown below:

treat,before,after,diff
No Treatment,13,16,3
No Treatment,10,18,8
No Treatment,16,16,0
Placebo,16,13,-3
Placebo,14,12,-2
Placebo,19,12,-7
Seroxat (Paxil),17,15,-2
Seroxat (Paxil),14,19,5
Seroxat (Paxil),20,14,-6
Effexor,17,19,2
Effexor,20,12,-8
Effexor,13,10,-3

In this snippet, the first row mentions the variable names. The remaining rows each individually represent one data point. It should be noted that, for some data points, values of all the columns may not be available. The “Data Pre-processing” section later in this chapter describes how to deal with such missing information.

An advantage of the CSV format is that it is more generic and useful when sharing with almost anyone. Why? Because specialized tools to read or manipulate it are not required. Any spreadsheet program such as Microsoft Excel or Google Sheets can readily open a CSV file and display it correctly most of the time. But there are also several disadvantages. For instance, since the comma is used to separate fields, if the data contains a comma, that could be problematic. This could be addressed by escaping the comma (typically adding a backslash before that comma), but this remedy could be frustrating because not everybody follows such standards.

2. TSV (Tab-Separated Values) files are used for raw data and can be imported into and exported from spreadsheet software. Tab-separated values files are essentially text files, and the raw data can be viewed by text editors, though such files are often used when moving raw data between spreadsheets. An example of a TSV file is shown below, along with the advantages and disadvantages of this format.
Suppose the registration records of all employees in an office are stored as follows:

Name<TAB>Age<TAB>Address
Ryan<TAB>33<TAB>1115 W Franklin
Paul<TAB>25<TAB>Big Farm Way
Jim<TAB>45<TAB>W Main St
Samantha<TAB>32<TAB>28 George St

where <TAB> denotes a TAB character.12

An advantage of TSV format is that the delimiter (tab) will not need to be avoided because it is unusual to have the tab character within a field. In fact, if the tab character is present, it may have to be removed. On the other hand, TSV is less common than other delimited formats such as CSV.

3. XML (eXtensible Markup Language) was designed to be both human- and machine-readable, and can thus be used to store and transport data. In the real world, computer systems and databases contain data in incompatible formats. As the XML data is stored in plain text format, it provides a software- and hardware-independent way of storing data. This makes it much easier to create data that can be shared by different applications. XML has quickly become the default mechanism for sharing data between disparate information systems. Currently, many information technology departments are deciding between purchasing native XML databases and converting existing data from relational and object-based storage to an XML model that can be shared with business partners. Here is an example of a page of XML:

<?xml version="1.0" encoding="UTF-8"?>
<bookstore>
  <book category="information science" cover="hardcover">
    <title lang="en">Social Information Seeking</title>
    <author>Chirag Shah</author>
    <year>2017</year>
    <price>62.58</price>
  </book>
  <book category="data science" cover="paperback">
    <title lang="en">Hands-On Introduction to Data Science</title>
    <author>Chirag Shah</author>
    <year>2019</year>
    <price>50.00</price>
  </book>
</bookstore>

If you have ever worked with HTML, then chances are this should look familiar. But as you can see, unlike HTML, we are using custom tags such as <book> and <price>.
That means whoever reads this will not be able to readily format or process it. But in contrast to HTML, the markup data in XML is not meant for direct visualization. Instead, one could write a program, a script, or an app that specifically parses this markup and uses it according to the context. For instance, one could develop a website that runs in a Web browser and uses the above data in XML, whereas someone else could write a different code and use this same data in a mobile app. In other words, the data remains the same, but the presentation is different. This is one of the core advantages of XML and one of the reasons XML is becoming quite important as we deal with multiple devices, platforms, and services relying on the same data.

4. RSS (Really Simple Syndication) is a format used to share data between services, and which was defined in the 1.0 version of XML. It facilitates the delivery of information from various sources on the Web. Information provided by a website in an XML file in such a way is called an RSS feed. Most current Web browsers can directly read RSS files, but a special RSS reader or aggregator may also be used.13 The format of RSS follows XML standard usage but in addition defines the names of specific tags (some required and some optional), and what kind of information should be stored in them. It was designed to show selected data. So, RSS starts with the XML standard, and then further defines it so that it is more specific.

Let us look at a practical example of RSS usage. Imagine you have a website that provides several updates of some information (news, stocks, weather) per day. To keep up with this, and even to simply check if there are any updates, a user will have to continuously return to this website throughout the day.
This is not only time-consuming, but also unfruitful as the user may be checking too frequently and encountering no updates, or, conversely, checking not often enough and missing out on crucial information as it becomes available. Users can check your site faster using an RSS aggregator (a site or program that gathers and sorts out RSS feeds). This aggregator will ensure that it has the information as soon as the website provides it, and then it pushes that information out to the user – often as a notification. Since RSS data is small and fast loading, it can easily be used with services such as mobile phones, personal digital assistants (PDAs), and smart watches.

RSS is useful for websites that are updated frequently, such as:
• News sites – Lists news with title, date and descriptions.
• Companies – Lists news and new products.
• Calendars – Lists upcoming events and important days.
• Site changes – Lists changed pages or new pages.

Do you want to publish your content using RSS? Here is a brief guideline on how to make it happen. First, you need to register your content with RSS aggregator(s). To participate, first create an RSS document and save it with an .xml extension (see example below). Then, upload the file to your website. Finally, register with an RSS aggregator. Each day (or with a frequency you specify) the aggregator searches the registered websites for RSS documents, verifies the link, and displays information about the feed so clients can link to documents that interest them.14

Here is a sample RSS document.

<?xml version="1.0" encoding="UTF-8" ?>
<rss version="2.0">
  <channel>
    <title>Dr. Chirag Shah’s Home Page</title>
    <link>http://chiragshah.org/</link>
    <description>Chirag Shah’s webhome</description>
    <item>
      <title>Awards and Honors</title>
      <link>http://chiragshah.org/awards.php</link>
      <description>Awards and Honors Dr.
Shah received</description>
    </item>
  </channel>
</rss>

Here, the <channel> element describes the RSS feed, and has three required “child” elements: <title> defines the title of the channel (e.g., Dr. Chirag Shah’s Home Page); <link> defines the hyperlink to the channel (e.g., http://chiragshah.org/); and <description> describes the channel (e.g., About Chirag Shah’s webhome). The <channel> element usually contains one or more <item> elements. Each <item> element defines an article or “story” in the RSS feed.

Having an RSS document is not useful if other people cannot reach it. Once your RSS file is ready, you need to get the file up on the Web. Here are the steps:

1. Name your RSS file. Note that the file must have an .xml extension.
2. Validate your RSS file (a good validator can be found at FEED Validator15).
3. Upload the RSS file to your Web directory on your Web server.
4. Copy the little orange “RSS” or “XML” button image to your Web directory.
5. Put the little orange “RSS” or “XML” button on the page where you will offer RSS to the world (e.g., on your home page). Then add a link to the button that links to the RSS file.
6. Submit your RSS feed to the RSS Feed Directories (you can search on Google or Yahoo! for “RSS Feed Directories”). Note that the URL to your feed is not your home page; it is the URL to your feed.
7. Register your feed with the major search engines:
   ○ Google16
   ○ Bing17
8. Update your feed. After registering your RSS feed, you must update your content frequently and ensure that your RSS feed is constantly available to those aggregators.

And that is it. Now, as new information becomes available on your website, it will be noticed by the aggregators and pushed to the users who have subscribed to your feed.

5. JSON (JavaScript Object Notation) is a lightweight data-interchange format. It is not only easy for humans to read and write, but also easy for machines to parse and generate.
It is based on a subset of the JavaScript Programming Language, Standard ECMA-262, 3rd Edition – December 1999.18 JSON is built on two structures:

• A collection of name–value pairs. In various languages, this is realized as an object, record, structure, dictionary, hash table, keyed list, or associative array.
• An ordered list of values. In most languages, this is realized as an array, vector, list, or sequence.

When exchanging data between a browser and a server, the data can be sent only as text.19 JSON is text, and we can convert any JavaScript object into JSON, and send JSON to the server. We can also convert any JSON received from the server into JavaScript objects. This way we can work with the data as JavaScript objects, with no complicated parsing and translations.

Let us look at examples of how one could send and receive data using JSON.

1. Sending data: If the data is stored in a JavaScript object, we can convert the object into JSON, and send it to a server. Below is an example:

<!DOCTYPE html>
<html>
<body>
<p id="demo"></p>
<script>
var obj = {"name":"John", "age":25, "state":"New Jersey"};
var obj_JSON = JSON.stringify(obj);
window.location = "json_Demo.php?x=" + obj_JSON;
</script>
</body>
</html>

2. Receiving data: If the received data is in JSON format, we can convert it into a JavaScript object. For example:

<!DOCTYPE html>
<html>
<body>
<p id="demo"></p>
<script>
var obj_JSON = '{"name":"John", "age":25, "state":"New Jersey"}';
var obj = JSON.parse(obj_JSON);
document.getElementById("demo").innerHTML = obj.name;
</script>
</body>
</html>

Now that we have seen several formats of data storage and presentation, it is important to note that these are by no means the only ways to do it, but they are some of the most preferred and commonly used ways. Having familiarized ourselves with data formats, we will now move on with manipulating the data.
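The same stringify/parse round trip is available outside the browser in most languages. For instance, here is a quick sketch (not from the book) using Python’s standard json module with the same record used in the JavaScript examples above:

```python
import json

# Serialize ("stringify") a Python dictionary into a JSON string.
obj = {"name": "John", "age": 25, "state": "New Jersey"}
obj_json = json.dumps(obj)
print(obj_json)  # {"name": "John", "age": 25, "state": "New Jersey"}

# Parse the JSON text back into a Python object.
parsed = json.loads(obj_json)
print(parsed["name"])  # John
```

Because JSON is just text, the string produced by one language can be parsed by another, which is what makes it such a convenient interchange format.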
2.4 Data Pre-processing

Data in the real world is often dirty; that is, it is in need of being cleaned up before it can be used for a desired purpose. This is often called data pre-processing. What makes data “dirty”? Here are some of the factors that indicate that data is not clean or ready to process:

• Incomplete. When some of the attribute values are lacking, certain attributes of interest are lacking, or attributes contain only aggregate data.
• Noisy. When data contains errors or outliers. For example, some of the data points in a dataset may contain extreme values that can severely affect the dataset’s range.
• Inconsistent. Data contains discrepancies in codes or names. For example, if the “Name” column for registration records of employees contains values other than alphabetical letters, or if records do not start with a capital letter, discrepancies are present.

Figure 2.1 shows the most important tasks involved in data pre-processing.20 In the subsections that follow, we will consider each of them in detail, and then work through an example to practice these tasks.

FYI: Bias in Data

It is worth noting here that when we use the term dirty to describe data, we are only referring to the syntactical, formatting, and structural issues with the data, and ignoring all other ways the data could be “muddled up.” What do I mean by this? Take, for instance, data used in a now famous study of facial recognition. The study showed that the operative algorithm performed better for white males than for women and non-white males. Why? Because the underlying data, which included many more instances of white males than black females, was imbalanced. Perhaps this was intentional, perhaps not. But bias is a real issue with many datasets and data sources that are blindly used in analyses.
Read more about this study in the NY Times article from February 9, 2018: https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html

It is important to start with data from a reputable source, but every decision you make in handling data could add subtle errors, adding bias. Introducing errors will tend to be systemic (throughout) and will tend to overemphasize or underemphasize outcomes. Scrutinize your choices so that you are relatively free of favoring a certain outcome.

2.4.1 Data Cleaning

Since there are several reasons why data could be “dirty,” there are just as many ways to “clean” it. For this discussion, we will look at three key methods that describe ways in which data may be “cleaned,” or better organized, or scrubbed of potentially incorrect, incomplete, or duplicated information.

2.4.1.1 Data Munging

Often, the data is not in a format that is easy to work with. For example, it may be stored or presented in a way that is hard to process. Thus, we need to convert it to something more suitable for a computer to understand. To accomplish this, there is no specific scientific method. The approaches to take are all about manipulating or wrangling (or munging) the data to turn it into something that is more convenient or desirable. This can be done manually, automatically, or, in many cases, semi-automatically.

[Figure 2.1 Forms of data pre-processing – data cleaning, data integration, data transformation (e.g., rescaling –17, 25, 39, 128, –39 to –0.17, 0.25, 0.39, 1.28, –0.39), and data reduction (N.H. Son, Data Cleaning and Data Pre-processing21).]

Consider the following text recipe. “Add two diced tomatoes, three cloves of garlic, and a pinch of salt in the mix.” This can be turned into a table (Table 2.2).
This table conveys the same information as the text, but it is more “analysis friendly.” Of course, the real question is – How did that sentence get turned into the table? A not-so-encouraging answer is “using whatever means necessary”! I know that is not what you want to hear because it does not sound systematic. Unfortunately, often there is no better or systematic method for wrangling. Not surprisingly, there are people who are hired to do specifically just this – wrangle ill-formatted data into something more manageable.

2.4.1.2 Handling Missing Data

Sometimes data may be in the right format, but some of the values are missing. Consider a table containing customer data in which some of the home phone numbers are absent. This could be due to the fact that some people do not have home phones – instead they use their mobile phones as their primary or only phone. Other times data may be missing due to problems with the process of collecting data, or an equipment malfunction. Or, comprehensiveness may not have been considered important at the time of collection. For instance, when we started collecting that customer data, it was limited to a certain city or region, and so the area code for a phone number was not necessary to collect. Well, we may be in trouble once we decide to expand beyond that city or region, because now we will have numbers from all kinds of area codes. Furthermore, some data may get lost due to system or human error while storing or transferring the data.

So, what to do when we encounter missing data? There is no single good answer. We need to find a suitable strategy based on the situation. Strategies to combat missing data include ignoring that record, using a global constant to fill in all missing values, imputation, inference-based solutions (Bayesian formula or a decision tree), etc. We will revisit some of these inference techniques later in the book in chapters on machine learning and data mining.

Table 2.2 Wrangled data for a recipe.
Ingredient   Quantity   Unit/size
Tomato       2          Diced
Garlic       3          Cloves
Salt         1          Pinch

2.4.1.3 Smooth Noisy Data

There are times when the data is not missing, but it is corrupted for some reason. This is, in some ways, a bigger problem than missing data. Data corruption may be a result of faulty data collection instruments, data entry problems, or technology limitations. For example, a digital thermometer measures temperature to one decimal point (e.g., 70.1°F), but the storage system ignores the decimal points. So, now we have 70.1°F and 70.9°F both stored as 70°F. This may not seem like a big deal, but for humans a 99.4°F temperature means you are fine, and 99.8°F means you have a fever, and if our storage system represents both of them as 99°F, then it fails to differentiate between healthy and sick persons!

Just as there is no single technique to take care of missing data, there is no one way to remove noise, or smooth out the noisiness in the data. However, there are some steps to try. First, you should identify or remove outliers. For example, records of previous students who sat for a data science examination show all students scored between 70 and 90 points, barring one student who received just 12 points. It is safe to assume that the last student’s record is an outlier (unless we have a reason to believe that this anomaly is really an unfortunate case for a student!). Second, you could try to resolve inconsistencies in the data. For example, all entries of customer names in the sales data should follow the convention of capitalizing all letters, and you could easily correct them if they are not.

2.4.2 Data Integration

To be as efficient and effective for various data analyses as possible, data from various sources commonly needs to be integrated. The following steps describe how to integrate multiple databases or files.

1. Combine data from multiple sources into a coherent storage place (e.g., a single file or a database).
2.
Engage in schema integration, or the combining of metadata from different sources.
3. Detect and resolve data value conflicts. For example:
   a. A conflict may arise; for instance, such as the presence of different attributes and values from various sources for the same real-world entity.
   b. Reasons for this conflict could be different representations or different scales; for example, metric vs. British units.
4. Address redundant data in data integration. Redundant data is commonly generated in the process of integrating multiple databases. For example:
   a. The same attribute may have different names in different databases.
   b. One attribute may be a “derived” attribute in another table; for example, annual revenue.
   c. Correlation analysis may detect instances of redundant data.

If this has begun to appear confusing, hang in there – some of these steps will become clearer as we take an example in the next section.

2.4.3 Data Transformation

Data must be transformed so it is consistent and readable (by a system). The following five processes may be used for data transformation. For the time being, do not worry if these seem too abstract. We will revisit some of them in the next section as we work through an example of data pre-processing.

1. Smoothing: Remove noise from data.
2. Aggregation: Summarization, data cube construction.
3. Generalization: Concept hierarchy climbing.
4. Normalization: Scaled to fall within a small, specified range and aggregation. Some of the techniques that are used for accomplishing normalization (but we will not be covering them here) are:
   a. Min–max normalization.
   b. Z-score normalization.
   c. Normalization by decimal scaling.
5. Attribute or feature construction.
   a. New attributes constructed from the given ones.

Detailed explanation of all of these techniques is out of scope for this book, but later in this chapter we will do a hands-on exercise to practice some of these in simpler forms.
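Of these, normalization is the easiest to illustrate. Here is a short sketch (using the example values shown in Figure 2.1) of min–max normalization, which rescales values to the range [0, 1], and z-score normalization, which rescales them to zero mean and unit variance:

```python
values = [-17, 25, 39, 128, -39]

# Min-max normalization: map the smallest value to 0 and the largest to 1.
lo, hi = min(values), max(values)
min_max = [(v - lo) / (hi - lo) for v in values]

# Z-score normalization: subtract the mean, divide by the standard deviation.
mean = sum(values) / len(values)
std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
z_scores = [(v - mean) / std for v in values]

print([round(v, 2) for v in min_max])
print([round(z, 2) for z in z_scores])
```

Both transformations preserve the ordering of the values; they only change the scale, which is often what downstream algorithms need.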
2.4.4 Data Reduction

Data reduction is a key process in which a reduced representation of a dataset is obtained that produces the same or similar analytical results. One example of a large dataset that could warrant reduction is a data cube. Data cubes are multidimensional sets of data that can be stored in a spreadsheet. But do not let the name fool you: a data cube could be in two, three, or a higher number of dimensions. Each dimension typically represents an attribute of interest.

Now, consider that you are trying to make a decision using this multidimensional data. Sure, each of its attributes (dimensions) provides some information, but perhaps not all of them are equally useful for a given situation. In fact, often we could reduce the information from all those dimensions to something much smaller and more manageable without losing much. This leads us to two of the most common techniques used for data reduction.

1. Data Cube Aggregation. The lowest level of a data cube is the aggregated data for an individual entity of interest. To reduce the data, use the smallest representation that is sufficient to address the given task. In other words, we reduce the data to its most meaningful size and structure for the task at hand.

2. Dimensionality Reduction. In contrast with the data cube aggregation method, where the data reduction takes the task into consideration, the dimensionality reduction method works with respect to the nature of the data. Here, a dimension or a column in your data spreadsheet is referred to as a "feature," and the goal of the process is to identify which features to remove or collapse into a combined feature. This requires identifying redundancy in the given data and/or creating composite dimensions or features that could sufficiently represent a set of raw features. Strategies for reduction include sampling, clustering, principal component analysis, etc.
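The redundancy-identification idea behind feature removal can be sketched briefly with a correlation check. The `pearson` helper, the 0.95 cutoff, and the two currency columns are all illustrative choices of ours, not part of the original example:

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length columns.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Two hypothetical columns: the same revenue figure recorded in two
# currencies. They are perfectly linearly related, hence redundant.
usd = [10.0, 12.0, 9.0, 15.0]
eur = [v * 0.92 for v in usd]
if abs(pearson(usd, eur)) > 0.95:   # illustrative cutoff for "redundant"
    print("near-duplicate columns; one can be dropped")
```

In practice one would compute such pairwise correlations across all feature pairs and consider dropping (or combining) those that are nearly collinear.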
We will learn about clustering in multiple chapters of this book as a part of machine learning. The rest of these strategies are outside the scope of this book.

2.4.5 Data Discretization

We often deal with data that are collected from processes that are continuous, such as temperature, ambient light, and a company's stock price. But sometimes we need to convert these continuous values into more manageable parts. This mapping is called discretization. As you can see, in undertaking discretization we are also essentially reducing data; thus, discretization could be perceived as a means of data reduction, but it holds particular importance for numerical data. There are three types of attributes involved in discretization:

a. Nominal: Values from an unordered set
b. Ordinal: Values from an ordered set
c. Continuous: Real numbers

To achieve discretization, divide the range of continuous attributes into intervals. For instance, we could decide to split the range of temperature values into cold, moderate, and hot, or the price of company stock into above or below its market valuation.

Hands-On Example 2.1: Data Pre-processing

In the previous section, we looked at theoretical (and that often means abstract) explanations of various stages of data pre-processing. Now, let us use a sample dataset and walk through those stages step by step. For this example, we will use a modified version of a dataset of the number of deaths from excessive wine consumption, available from OA 2.1, which we have tweaked (Table 2.3) to explain the pre-processing stages. The dataset consists of the following attributes:

a. Name of the country from which the sample was obtained
b. Alcohol consumption, measured as liters of wine per capita
c. Number of deaths from alcohol consumption, per 100,000 people
d. Number of heart disease deaths, per 100,000 people
e.
Number of deaths from liver diseases, also per 100,000 people

Now, we can use this dataset to test various hypotheses about the relations between attributes, such as the relation between the number of deaths and the amount of alcohol consumption, or the relation between the number of fatal heart disease cases and the amount of wine consumed. But, to build an effective analysis (more on this in later chapters), first we need to prepare the dataset. Here is how we are going to do it:

1. Data Cleaning. In this stage, we will go through the following pre-processing steps:

• Smooth Noisy Data. We can see that the per-capita wine consumption value for Iceland is −0.800000012. However, wine consumption per capita cannot be negative. Therefore, it must be a faulty entry, and we should change the alcohol consumption for Iceland to 0.800000012. Using the same logic, the number of deaths for Israel should be converted from −834 to 834.

• Handling Missing Data. As we can see in the dataset, we have missing values (represented by NA – not available) for the number of cases of heart disease for Canada and the numbers of cases of heart and liver disease for Spain. A simple workaround is to replace all the NAs with some common value, such as zero or the average of all the values for that attribute. Here, we are going to use the average of the attribute to handle the missing values. So, for both Canada and Spain, we will use the value of 185 as the number of heart disease cases. Likewise, the number of liver disease cases for Spain is replaced by 20.27. It is important to note: depending on the nature of the problem, it may not be a good idea to replace all of the NAs with the same value. A better solution would be to derive the value of the missing attribute from the values of other attributes of that data point.

• Data Wrangling. As previously discussed, data wrangling is the process of manually converting or mapping data from one "raw" form into another format.
For example, it may happen that, for one country, we have the number of deaths per 10,000 people rather than per 100,000 as for the other countries. In that case, we need to transform the number of deaths for that country to per 100,000, or transform the figure for every other country to per 10,000. Fortunately for us, this dataset does not involve any data wrangling steps. So, at the end of this stage the dataset would look like what we see in Table 2.4.

Table 2.3 Excessive wine consumption and mortality data.

#   Country         Alcohol        Deaths  Heart  Liver
1   Australia       2.5            785     211    15.30000019
2   Austria         3.000000095    863     167    45.59999847
3   Belg. and Lux.  2.900000095    883     131    20.70000076
4   Canada          2.400000095    793     NA     16.39999962
5   Denmark         2.900000095    971     220    23.89999962
6   Finland         0.800000012    970     297    19
7   France          9.100000381    751     11     37.90000153
8   Iceland         −0.800000012   743     211    11.19999981
9   Ireland         0.699999988    1000    300    6.5
10  Israel          0.600000024    −834    183    13.69999981
11  Italy           27.900000095   775     107    42.20000076
12  Japan           1.5            680     36     23.20000076
13  Netherlands     1.799999952    773     167    9.199999809
14  New Zealand     1.899999976    916     266    7.699999809
15  Norway          0.0800000012   806     227    12.19999981
16  Spain           6.5            724     NA     NA
17  Sweden          1.600000024    743     207    11.19999981
18  Switzerland     5.800000191    693     115    20.29999924
19  UK              1.299999952    941     285    10.30000019
20  US              1.200000048    926     199    22.10000038
21  West Germany    2.700000048    861     172    36.70000076

2. Data Integration. Now let us assume we have another dataset (fictitious), collected from a different source, about alcohol consumption and the number of related fatalities across various States of India, as shown in Table 2.5.

Table 2.4 Wine consumption vs. mortality data after data cleaning.

#   Country         Alcohol        Deaths  Heart  Liver
1   Australia       2.5            785     211    15.30000019
2   Austria         3.000000095    863     167    45.59999847
3   Belg. and Lux.  2.900000095    883     131    20.70000076
4   Canada          2.400000095    793     185    16.39999962
5   Denmark         2.900000095    971     220    23.89999962
6   Finland         0.800000012    970     297    19
7   France          9.100000381    751     11     37.90000153
8   Iceland         0.800000012    743     211    11.19999981
9   Ireland         0.699999988    1000    300    6.5
10  Israel          0.600000024    834     183    13.69999981
11  Italy           27.900000095   775     107    42.20000076
12  Japan           1.5            680     36     23.20000076
13  Netherlands     1.799999952    773     167    9.199999809
14  New Zealand     1.899999976    916     266    7.699999809
15  Norway          0.0800000012   806     227    12.19999981
16  Spain           6.5            724     185    20.27
17  Sweden          1.600000024    743     207    11.19999981
18  Switzerland     5.800000191    693     115    20.29999924
19  UK              1.299999952    941     285    10.30000019
20  US              1.200000048    926     199    22.10000038
21  West Germany    2.700000048    861     172    36.70000076

Table 2.5 Data about alcohol consumption and health from various States in India.

#  Name of the State            Alcohol consumption  Heart disease  Fatal alcohol-related accidents
1  Andaman and Nicobar Islands  1.73                 20,312         2201
2  Andhra Pradesh               2.05                 16,723         29,700
3  Arunachal Pradesh            1.98                 13,109         11,251
4  Assam                        0.91                 8532           211,250
5  Bihar                        3.21                 12,372         375,000
6  Chhattisgarh                 2.03                 28,501         183,207
7  Goa                          5.79                 19,932         307,291

Here is what the dataset contains:

A. Name of the State.
B. Liters of alcohol consumed per capita.
C. Number of fatal heart diseases, measured per 1,000,000 people.
D. Number of fatal accidents related to alcohol, per 1,000,000 people.

Now we can use this dataset to integrate the attributes for India into our original dataset. To do this, we calculate the total alcohol consumption for the country of India as the average of the alcohol consumption of all the States, which is 2.95. Similarly, we can calculate the number of fatal heart diseases per 100,000 people for India as 171 (approximated to the nearest integer). Since we do not have any source for the number of total deaths or the number of fatal liver diseases for India, we are going to handle these the same way we previously addressed any missing values.
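Two of the operations used so far, mean imputation of NA entries (as done for Canada and Spain) and rescaling a per-1,000,000 figure to the per-100,000 convention of the main table, can be sketched in Python. The numbers below are a small hypothetical subset of the columns above, not the full data:

```python
def impute_mean(column):
    # Replace missing entries (None stands in for "NA") with the
    # mean of the observed values in the same column.
    observed = [v for v in column if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in column]

# A heart-disease column with one NA entry, as for Canada above.
heart = [211, 167, 131, None, 220]
print(impute_mean(heart))   # the None becomes 182.25, the column mean

# Integration: a source reporting deaths per 1,000,000 people must be
# rescaled to the per-100,000 convention of the main table (divide by 10).
per_million = [20312, 16723, 13109]
print([v / 10 for v in per_million])   # [2031.2, 1672.3, 1310.9]
```

As the text notes, a single imputed constant is a blunt instrument; deriving the missing value from the other attributes of the same record is often a better choice.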
The resultant dataset is shown in Table 2.6. Note that, before using this external dataset, we have made some simplifying assumptions for our own convenience. When we use the average of the alcohol consumption of these States as the amount of alcohol consumption for India, we are assuming that: (a) the populations of these States are the same, or at least similar; (b) the sample of these States is representative of the whole population of India; and (c) wine consumption is roughly equivalent to total alcohol consumption in India, even though in reality wine consumption per capita should be less than total alcohol consumption per capita, as there are other kinds of alcoholic beverages in the market.

Table 2.6 Wine consumption and associated mortality after data integration.

#   Country         Alcohol        Deaths  Heart  Liver
1   Australia       2.5            785     211    15.30000019
2   Austria         3.000000095    863     167    45.59999847
3   Belg. and Lux.  2.900000095    883     131    20.70000076
4   Canada          2.400000095    793     185    16.39999962
5   Denmark         2.900000095    971     220    23.89999962
6   Finland         0.800000012    970     297    19
7   France          9.100000381    751     11     37.90000153
8   Iceland         0.800000012    743     211    11.19999981
9   Ireland         0.699999988    1000    300    6.5
10  Israel          0.600000024    834     183    13.69999981
11  Italy           27.900000095   775     107    42.20000076
12  Japan           1.5            680     36     23.20000076
13  Netherlands     1.799999952    773     167    9.199999809
14  New Zealand     1.899999976    916     266    7.699999809
15  Norway          0.0800000012   806     227    12.19999981
16  Spain           6.5            724     185    20.27
17  Sweden          1.600000024    743     207    11.19999981
18  Switzerland     5.800000191    693     115    20.29999924
19  UK              1.299999952    941     285    10.30000019
20  US              1.200000048    926     199    22.10000038
21  West Germany    2.700000048    861     172    36.70000076
22  India           2.950000000    750     171    20.27

3. Data Transformation. As previously mentioned, the data transformation process involves one or more of smoothing, removing noise from data, summarization, generalization, and normalization.
For this example, we will employ smoothing, which is simpler than summarization and normalization. As we can see, in our data the wine consumption per capita for Italy is unusually high, whereas that for Norway is unusually low. So, chances are these are outliers. In this case, we will replace the value of wine consumption for Italy with 7.900000095. Similarly, for Norway we will use the value of 0.800000012 in place of 0.0800000012. We are treating both of these potential errors as "equipment error" or "entry error," which resulted in an extra digit for both of these countries (an extra "2" in front for Italy and an extra "0" after the decimal point for Norway). This is a reasonable assumption given the limited context we have about the dataset. A more practical approach would be to look at the nearest geolocation for which we have the values and use that value to make predictions about the countries with erroneous entries. So, at the end of this step the dataset will be transformed into what is shown in Table 2.7.

Table 2.7 Wine consumption and associated mortality dataset after data transformation.

#   Country         Alcohol        Deaths  Heart  Liver
1   Australia       2.5            785     211    15.30000019
2   Austria         3.000000095    863     167    45.59999847
3   Belg. and Lux.  2.900000095    883     131    20.70000076
4   Canada          2.400000095    793     185    16.39999962
5   Denmark         2.900000095    971     220    23.89999962
6   Finland         0.800000012    970     297    19
7   France          9.100000381    751     11     37.90000153
8   Iceland         0.800000012    743     211    11.19999981
9   Ireland         0.699999988    1000    300    6.5
10  Israel          0.600000024    834     183    13.69999981
11  Italy           7.900000095    775     107    42.20000076
12  Japan           1.5            680     36     23.20000076
13  Netherlands     1.799999952    773     167    9.199999809
14  New Zealand     1.899999976    916     266    7.699999809
15  Norway          0.800000012    806     227    12.19999981
16  Spain           6.5            724     185    20.27
17  Sweden          1.600000024    743     207    11.19999981
18  Switzerland     5.800000191    693     115    20.29999924
19  UK              1.299999952    941     285    10.30000019
20  US              1.200000048    926     199    22.10000038
21  West Germany    2.700000048    861     172    36.70000076
22  India           2.950000000    750     171    20.27

4. Data Reduction. The process of data reduction is aimed at producing a reduced representation of the dataset that can be used to obtain the same or similar analytical results. For our example, the sample is relatively small, with only 22 rows. Now imagine that we had values for all 196 countries in the world, along with geospatial values for each attribute. In that case, the number of rows would be large, and, depending on the limited processing and storage capacity at your disposal, it may make more sense to round the alcohol consumption per capita to two decimal places. Each extra decimal place for every data point in such a large dataset requires a significant amount of storage capacity. Thus, reducing the liver column to one decimal place and the alcohol consumption column to two decimal places results in the dataset shown in Table 2.8. Note that data reduction does not mean just reducing the size of attribute values – it may also involve removing some attributes, which is known as feature space selection.
For example, if we are interested in the relation between the wine consumed and the number of casualties from heart disease, we may opt to remove the attribute "number of liver diseases," if we assume that there is no relation between the number of heart disease fatalities and the number of liver disease fatalities.

Table 2.8 Wine consumption and associated mortality dataset after data reduction.

#   Country         Alcohol  Deaths  Heart  Liver
1   Australia       2.50     785     211    15.3
2   Austria         3.00     863     167    45.6
3   Belg. and Lux.  2.90     883     131    20.7
4   Canada          2.40     793     185    16.4
5   Denmark         2.90     971     220    23.9
6   Finland         0.80     970     297    19.0
7   France          9.10     751     11     37.9
8   Iceland         0.80     743     211    11.2
9   Ireland         0.70     1000    300    6.5
10  Israel          0.60     834     183    13.7
11  Italy           7.90     775     107    42.2
12  Japan           1.50     680     36     23.2
13  Netherlands     1.80     773     167    9.2
14  New Zealand     1.90     916     266    7.7
15  Norway          0.80     806     227    12.2
16  Spain           6.50     724     185    20.3
17  Sweden          1.60     743     207    11.2
18  Switzerland     5.80     693     115    20.3
19  UK              1.30     941     285    10.3
20  US              1.20     926     199    22.1
21  West Germany    2.70     861     172    36.7
22  India           2.95     750     171    20.3

5. Data Discretization. As we can see, all the attributes involved in our dataset are of continuous type (values in real numbers). However, depending on the model you want to build, you may have to discretize the attribute values into binary or categorical types. For example, you may want to discretize the wine consumption per capita into four categories – less than or equal to 1.00 per capita (represented by 0), more than 1.00 but less than or equal to 2.00 per capita (1), more than 2.00 but less than or equal to 5.00 per capita (2), and more than 5.00 per capita (3). The resultant dataset should look like that shown in Table 2.9.

And that is the end result of this exercise. Yes, it may seem that we did not conduct real data processing or analytics. But through our pre-processing techniques, we have managed to prepare a much better and more meaningful dataset. Often, that itself is half the battle.
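The binning in step 5 can be sketched as a small Python function using the same cut-offs given in the text; the sample values are a few rows (Australia, Finland, France, Sweden) from the rounded table:

```python
def discretize_alcohol(value):
    # Bin per-capita wine consumption using the cut-offs from the text:
    # <= 1.00 -> 0, <= 2.00 -> 1, <= 5.00 -> 2, otherwise -> 3.
    if value <= 1.00:
        return 0
    if value <= 2.00:
        return 1
    if value <= 5.00:
        return 2
    return 3

print([discretize_alcohol(v) for v in [2.50, 0.80, 9.10, 1.60]])  # -> [2, 0, 3, 1]
```

The same pattern (an ordered sequence of thresholds) applies to any interval-based discretization, such as the cold/moderate/hot temperature split mentioned earlier.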
Having said that, for most of the book we will focus on the other half of the battle – processing, visualizing, and analyzing the data for solving problems and making decisions. Nonetheless, I hope the sections on data pre-processing and the hands-on exercise we did here have given some insight into what needs to occur before you get your hands on nice-looking data for processing.

Table 2.9 Wine consumption and mortality dataset at the end of pre-processing.

#   Country         Alcohol  Deaths  Heart  Liver
1   Australia       2        785     211    15.3
2   Austria         2        863     167    45.6
3   Belg. and Lux.  2        883     131    20.7
4   Canada          2        793     185    16.4
5   Denmark         2        971     220    23.9
6   Finland         0        970     297    19.0
7   France          3        751     11     37.9
8   Iceland         0        743     211    11.2
9   Ireland         0        1000    300    6.5
10  Israel          0        834     183    13.7
11  Italy           3        775     107    42.2
12  Japan           1        680     36     23.2
13  Netherlands     1        773     167    9.2
14  New Zealand     1        916     266    7.7
15  Norway          0        806     227    12.2
16  Spain           3        724     185    20.3
17  Sweden          1        743     207    11.2
18  Switzerland     3        693     115    20.3
19  UK              1        941     285    10.3
20  US              1        926     199    22.1
21  West Germany    2        861     172    36.7
22  India           2        750     171    20.3

Try It Yourself 2.1: Data Pre-processing

Imagine you want to open a new bakery, and you are trying to figure out which item on the menu will give you the maximum profit margin. You have the following few options:

• For cookies, you would need flour, chocolate, butter, and other ingredients, which come at $3.75 per pound (lb). The initial setup cost is $1580, while the labor charge is another $30 per hour. In one hour you can serve two batches of cookies, making 250 cookies per batch. Each batch requires 15 lb of ingredients, and each cookie can be priced at $2.

• For your second option, you can make cake with the same ingredients. However, the ratio of the ingredients being different, it will cost you $4 per lb. The initial setup cost is $2000, while the labor charge remains the same. However, baking two batches of cake will require 3 hours in total, with five cakes in each batch.
Each cake, baked with 2 lb of ingredients, can be sold at $34.

• In the third option, you can make bagels in your shop, which will require flour, butter, and other ingredients, costing you $2.50 per lb. The initial setup cost is low, at $680, as is the labor cost, $25 per hour. In one batch, using 20 lb of ingredients, you can make 300 bagels in 45 minutes. Each bagel can be sold at $1.75.

• For the fourth and final option, you can bake loaves of bread, for which you will need only flour and butter as ingredients, costing you $3 per lb. The initial setup cost is marginal, between $270 and $350; however, the labor charge is high, at $40 per hour. However, you can bake as many as 1000 loaves in 2 hours; each can be priced at $3.

Use this information to create a dataset that can be used to decide the menu for your bakery.

Summary

Many of the examples of data we have seen so far have been in nice tables, but it should be clear by now that data appears in many forms, sizes, and formats. Some are stored in spreadsheets, and others are found in text files. Some are structured, and some are unstructured. In this book, most data we will deal with are found in text format, but there are plenty of data out there in image, audio, and video formats.

As we saw, the process of data processing is more complicated if there is missing or corrupt data, and some data may need cleaning or converting before we can even begin to do any processing with it. This requires several forms of pre-processing. Some data cleaning or transformation may be required, and some may depend on our purpose, context, and availability of analysis tools and skills.
For instance, if you know SQL (covered in Chapter 7) and want to take advantage of this effective and efficient query language, you may want to import your CSV-formatted data into a MySQL database, even if that CSV data has no "issues."

Data pre-processing is so important that many organizations have specific job positions just for this kind of work. These people are expected to have the skills to do all the stages described in this chapter: from cleaning to transformation, and even finding or approximating the missing or corrupt values in a dataset. There is some technique, some science, and much engineering involved in this process. But it is a very important job, because, without having the right data in the proper format, almost all that follows in this book would be impossible. To put it differently – before you jump to any of the "fun" analyses here, make sure you have at least thought about whether your data needs any pre-processing; otherwise you may be asking the right question of the wrong data!

Key Terms

• Structured data: Highly organized information that can be seamlessly included in a database and readily searched via simple search operations.
• Unstructured data: Information devoid of any underlying structure.
• Open data: Data that is freely available in a public domain and can be used by anyone as they wish, without restrictions from copyright, patents, or other mechanisms of control.
• Application Programming Interface (API): A programmatic way to access data; a set of rules and methods for requesting and sending data.
• Outlier: A data point that is markedly different in value from the other data points of the sample.
• Noisy data: A dataset that has one or more instances of errors or outliers.
• Nominal data: The data type is nominal when there is no natural order between the possible values; for example, colors.
• Ordinal data: If the possible values of a data type are from an ordered set, then the type is ordinal; for example, grades in a mark sheet.
• Continuous data: A data type that has an infinite number of possible values; for example, real numbers.
• Data cubes: Multidimensional sets of data that can be stored in a spreadsheet. A data cube could be in two, three, or higher dimensions. Each dimension typically represents an attribute of interest.
• Feature space selection: A method for selecting a subset of features or columns from the given dataset as a way to do data reduction.

Conceptual Questions

1. List at least two differences between structured and unstructured data.
2. Give three examples of structured data formats.
3. Give three examples of unstructured data formats.
4. How will you convert a CSV file to a TSV file? List at least two different strategies.
5. You are looking at employee records. Some have no middle name, some have a middle initial, and others have a complete middle name. How do you explain such inconsistency in the data? Provide at least two explanations.

Hands-On Problems

Problem 2.1

The following dataset, obtained from OA 2.2, contains statistics in arrests per 100,000 residents for assault and murder, in each of the 50 US states, in 1973. Also given is the percentage of the population living in urban areas.
State            Murder  Assault  Urban population (%)
Alabama          13.2    236      58
Alaska           10      263      48
Arizona          8.1     294      80
Arkansas         8.8     190      50
California       9       276      91
Colorado         7.9     204      78
Connecticut      3.3     110      77
Delaware         5.9     238      72
Florida          15.4    335      80
Georgia          17.4             60
Hawaii           5.3     46       83
Idaho            2.6     120      54
Illinois         10.4    249      83
Indiana          7.2     113      65
Iowa             2.2     56       570
Kansas           6       115      66
Kentucky         9.7     109      52
Louisiana        15.4    249      66
Maine            2.1     83       51
Maryland         11.3    300      67
Massachusetts    4.4     149      85
Michigan         12.1    255      74
Minnesota        2.7     72       66
Mississippi      16.1    259      44
Missouri         9       178      70
Montana          6       109      53
Nebraska         4.3     102      62
Nevada           12.2    252      81
New Hampshire    2.1     57       56
New Jersey       7.4     159      89
New Mexico       11.4    285      70
New York         11.1    254      6
North Carolina   13      337      45
North Dakota     0.8     45       44
Ohio             7.3     120      75
Oklahoma         6.6     151      68
Oregon           4.9     159      67
Pennsylvania     6.3     106      72
Rhode Island     3.4     174      87
South Carolina   14.4    879      48
South Dakota     3.8     86       45
Tennessee        13.2    188      59
Texas            12.7    201      80
Utah             3.2     120      80
Vermont          2.2     48       32
Virginia         8.5     156      63
Washington       4       145      73
West Virginia    5.7     81       39
Wisconsin        2.6     53       66
Wyoming          6.8     161      60

Now, use the pre-processing techniques at your disposal to prepare the dataset for analysis.

a. Address all the missing values.
b. Look for outliers and smooth noisy data.
c. Prepare the dataset to establish a relation between an urban population category and a crime type. [Hint: Convert the urban population percentage into categories, for example, small (<50%), medium (<60%), large (<70%), and extra-large (70% and above) urban population.]

Problem 2.2

The following is a dataset of bridges in Pittsburgh. The original dataset was prepared by Yoram Reich and Steven J. Fenves, Department of Civil Engineering and Engineering Design Research Center, Carnegie Mellon University, and is available from OA 2.3.

ID   Purpose   Length  Lanes  Clear  T or D   Material  Span    Rel-L
E1   Highway   ?       2      N      Through  Wood      Short   S
E2   Highway   1037    2      N      Through  Wood      Short   S
E3   Aqueduct  ?       1      N      Through  Wood      ?       S
E5   Highway   1000    2      N      Through  Wood      Short   S
E6   Highway   ?       2      N      Through  Wood      ?       S
E7   Highway   990     2      N      Through  Wood      Medium  S
E8   Aqueduct  1000    1      N      Through  Iron      Short   S
E9   Highway   1500    2      N      Through  Iron      Short   S
E10  Aqueduct  ?       1      N      Deck     Wood      ?       S
E11  Highway   1000    2      N      Through  Wood      Medium  S
E12  RR        ?       2      N      Deck     Wood      ?       S
E14  Highway   1200    2      N      Through  Wood      Medium  S
E13  Highway   ?       2      N      Through  Wood      ?       S
E15  RR        ?       2      N      Through  Wood      ?       S
E16  Highway   1030    2      N      Through  Iron      Medium  S-F
E17  RR        1000    2      N      Through  Iron      Medium  ?
E18  RR        1200    2      N      Through  Iron      Short   S
E19  Highway   1000    2      N      Through  Wood      Medium  S
E20  Highway   1000    2      N      Through  Wood      Medium  S
E21  RR        ?       2      ?      Through  Iron      ?       ?
E23  Highway   1245    ?      ?      Through  Steel     Long    F
E22  Highway   1200    4      G      Through  Wood      Short   S
E24  RR        ?       2      G      ?        Steel     ?       ?
E25  RR        ?       2      G      ?        Steel     ?       ?
E27  RR        ?       2      G      Through  Steel     ?       F
E26  RR        1150    2      G      Through  Steel     Medium  S
E30  RR        ?       2      G      Through  Steel     Medium  F
E29  Highway   1080    2      G      Through  Steel     Medium  ?
E28  Highway   1000    2      G      Through  Steel     Medium  S
E32  Highway   ?       2      G      Through  Iron      Medium  F
E31  RR        1161    2      G      Through  Steel     Medium  S
E34  RR        4558    2      G      Through  Steel     Long    F
E33  Highway   1120    ?      G      Through  Iron      Medium  F
E36  Highway   ?       2      G      Through  Iron      Short   F
E35  Highway   1000    2      G      Through  Steel     Medium  F

Use this dataset to complete the following tasks:

a. Address all the missing values.
b. Look for outliers and smooth noisy data.
c. Prepare the dataset to establish a relation among:
   i. Length of the bridge and its purpose.
   ii. Number of lanes and the bridge's material.
   iii. Span of the bridge and number of lanes.

Problem 2.3

The following is a dataset that involves child mortality rates and is inspired by data collected from UNICEF. The original dataset is available from OA 2.4. According to the report, the world has achieved substantial success in reducing child mortality during the last few decades.
According to the UNICEF report, the global under-five mortality rate has decreased from 93 deaths per 1000 live births in 1990 to less than 50 in 2016.

Year  Under-five mortality rate  Infant mortality rate  Neonatal mortality rate
1990  93.4                       64.8                   36.8
1991  92.1                       63.9                   36.3
1992  90.9                       63.1                   35.9
1993  89.7                       62.3                   35.4
1994  88.7                       61.4
1995  87.3                       60.5                   34.4
1996  85.6                       59.4                   33.7
1997                             58.2                   33.1
1998  82.1                       56.9                   32.3
1999  79.9                       55.4                   31.5
2000  77.5                       53.9                   30.7
2001  74.8                       52.1                   29.8
2002  72                                                28.9
2003  69.2                       48.6                   28
2004  66.7                       46.9
2005                             45.1                   26.1
2006  61.1                       43.4                   25.3
2007  58.5                                              24.4
2008  56.2                       40.3                   23.6
2009  53.7                       38.8                   22.9
2010                             37.4                   22.2
2011  49.3                       36                     21.5
2012  47.3                       34.7                   20.8
2013  45.5                       33.6                   20.2
2014  43.7                                              19.6
2015  42.2                       31.4                   19.1
2016  40.8                       30.5                   18.6

However, as you can see, the dataset has a number of missing instances, which need to be fixed before clear progress on child mortality from 1990 to 2016 can be explained. Use this dataset to complete the following tasks:

a. Address all the missing values using the techniques at your disposal.
b. Prepare the dataset to establish the following relations:
   i. Under-five mortality rate and neonatal mortality rate.
   ii. Infant mortality rate and neonatal mortality rate.
   iii. Year and infant mortality rate.

[Hints: You may think of converting the mortality rates into five-point Likert scale values. You may count the year before this dataset (i.e., 1989) as the starting point of this program, to assess the progress we have made as the years have passed.]

Further Reading and Resources

• Bellinger, G., Castro, D., & Mills, A. Data, information, knowledge, and wisdom: http://www.systems-thinking.org/dikw/dikw.htm
• US Government Open Data Policy: https://project-open-data.cio.gov/
• Developing insights from social media data: https://sproutsocial.com/insights/social-media-data/
• Social Media Data Analytics course on Coursera by the author: https://www.coursera.org/learn/social-media-data-analytics

Notes

1. Statistics Canada.
Definitions: http://www.statcan.gc.ca/edu/power-pouvoir/ch1/definitions/5214853-eng.htm
2. BrightPlanet®. Structured vs. unstructured data definition: https://brightplanet.com/2012/06/structured-vs-unstructured-data/
3. US Government data repository: https://www.data.gov/
4. City of Chicago data repository: https://data.cityofchicago.org/
5. US Government policy M-13-3: https://project-open-data.cio.gov/policy-memo/
6. Project Open Data "open license": https://project-open-data.cio.gov/open-licenses/
7. Facebook Graph API: https://developers.facebook.com/docs/graph-api/
8. Yelp dataset challenge: https://www.yelp.com/dataset/challenge
9. SPM created by Karl Friston: https://en.wikipedia.org/wiki/Karl_Friston
10. UCL SPM website: http://www.fil.ion.ucl.ac.uk/spm/
11. UF Health. UF Biostatistics open learning textbook: http://bolt.mph.ufl.edu/2012/08/02/learn-by-doing-exploring-a-dataset/
12. An actual tab will appear as simply a space. To aid clarity, in this book we are explicitly spelling out <TAB>. Therefore, wherever you see <TAB> in this book, in reality an actual tab would appear as a space.
13. XUL.fr. Really Simple Syndication definition: http://www.xul.fr/en-xml-rss.html
14. w3schools. XML RSS explanation and example: https://www.w3schools.com/xml/xml_rss.asp
15. FEED Validator: http://www.feedvalidator.org
16. Google: submit your content: http://www.google.com/submityourcontent/website-owner
17. Bing submit site: http://www.bing.com/toolbox/submit-site-url
18. JSON: http://www.json.org/
19. w3schools. JSON introduction: http://www.w3schools.com/js/js_json_intro.asp
20. KDnuggets™ introduction to data mining course: http://www.kdnuggets.com/data_mining_course/
21.
Data cleaning and pre-processing presentation: http://www.mimuw.edu.pl/~son/datamining/DM/4-preprocess.pdf

3 Techniques

"Information is the oil of the 21st century, and analytics is the combustion engine."
— Peter Sondergaard, Senior Vice President, Gartner Research

What do you need?

• Computational thinking (refer to Chapter 1).
• Knowledge of basic math operations, including exponents and roots.
• A basic understanding of linear algebra (e.g., line representation and line equation).
• Access to a spreadsheet program such as Microsoft Excel or Google Sheets.

What will you learn?

• Various forms of data analysis and analytics techniques.
• A simple introduction to correlation and regression.
• How to undertake simple summaries and presentations of numerical and categorical data.

3.1 Introduction

There are many tools and techniques that a data scientist is expected to know or to acquire as problems arise. Often, it is hard to separate tools and techniques. One whole section of this book (four chapters) is dedicated to teaching how to use various tools, and, as we learn about them, we also pick up and practice some essential techniques. This happens for two reasons. The first one is already mentioned here – it is hard to separate tools from techniques. Regarding the second reason – since our main purpose is not necessarily to master any programming tools, we will learn about programming languages and platforms in the context of solving data problems.

That said, there are aspects of data science-related techniques that are better studied without worrying about any particular tool or programming language. And that is the approach we will pursue. In this chapter, we will review some basic techniques used in data science and see how they are used for performing analytics and data analyses. We will begin by considering some differences and similarities between data analysis and data analytics.
Often, it is not critical to distinguish between the two, but here we will see how the distinction might be important. For the rest of the chapter we will look at various forms of analyses: descriptive, diagnostic, predictive, prescriptive, exploratory, and mechanistic. In the process we will be reviewing basic statistics. That should not surprise you, as data science is often considered just a fancy term for statistics! As we learn about these tools and techniques, we will also look at some examples and gain experience doing real data analysis (though it will be limited, given our lack of knowledge about any programming or specialized tools as of this chapter).

3.2 Data Analysis and Data Analytics

These two terms – data analysis and data analytics – are often used interchangeably and can be confusing. Is a job that calls for data analytics really talking about data analysis, and vice versa? Well, there are some subtle but important differences between analysis and analytics. A lack of understanding can affect the practitioner’s ability to leverage the data to their best advantage.1 According to Dave Kasik, Boeing’s Senior Technical Fellow in visualization and interactive techniques, “In my terminology, data analysis refers to hands-on data exploration and evaluation. Data analytics is a broader term and includes data analysis as [a] necessary subcomponent. Analytics defines the science behind the analysis. The science means understanding the cognitive processes an analyst uses to understand problems and explore data in meaningful ways.”2 One way to understand the difference between analysis and analytics is to think in terms of past and future. Analysis looks backwards, providing marketers with a historical view of what has happened. Analytics, on the other hand, models the future or predicts a result.
Analytics makes extensive use of mathematics and statistics, employing descriptive techniques and predictive models to gain valuable knowledge from data. These insights from data are used to recommend action or to guide decision-making in a business context. Thus, analytics is not so much concerned with individual analyses or analysis steps, but with the entire methodology. There is no clear, agreeable-to-all classification scheme available in the literature to categorize all the analysis techniques that are used by data science professionals. However, based on their application at various stages of data analysis, I have categorized analysis techniques into six classes of analysis and analytics: descriptive analysis, diagnostic analytics, predictive analytics, prescriptive analytics, exploratory analysis, and mechanistic analysis. Each of these and their applications are described below.

3.3 Descriptive Analysis

Descriptive analysis is about: “What is happening now based on incoming data.” It is a method for quantitatively describing the main features of a collection of data. Here are a few key points about descriptive analysis:
• Typically, it is the first kind of data analysis performed on a dataset.
• Usually it is applied to large volumes of data, such as census data.
• Description and interpretation are different steps.

Descriptive analysis can be useful in the sales cycle, for example, to categorize customers by their likely product preferences and purchasing patterns. Another example is the Census Data Set, where descriptive analysis is applied to a whole population (see Figure 3.1). Researchers and analysts collecting quantitative data or translating qualitative data into numbers are often faced with a large amount of raw data that needs to be organized and summarized before it can be analyzed. Data can only reveal patterns and allow observers to draw conclusions when it is presented as an organized summary.
Here is where descriptive statistics come into play: they facilitate analyzing and summarizing the data and are thus instrumental to processes inherent in data science. Data cannot be properly used if it is not correctly interpreted. This requires appropriate statistics. For example, should we use the mean, median, or mode, two of these, or all three?4 Each of these measures is a summary that emphasizes certain aspects of the data and overlooks others. They all provide information we need to get a full picture of the world we are trying to understand. The process of describing something requires that we extract its important parts: to paint a scene, an artist must first decide which features to highlight. Similarly, humans often point out significant aspects of the world with numbers, such as the size of a room, the population of a state (as in Figure 3.1), or the Scholastic Aptitude Test (SAT) score of a high-school senior. Nouns name these things or characteristics: areas, populations, and verbal learning abilities. To describe these features, English speakers use adjectives, for example, decent-sized room, small-town population, bright high-school senior. But numbers can replace these words: 100 sq. ft. room, Florida population of 18,801,318, or a senior with a verbal score of 800. Numerical representation can hold a considerable advantage over words. Numbers allow humans to more precisely differentiate between objects or concepts. For example, two rooms may both be described as “small,” but numbers distinguish a 9-foot expanse from a 10-foot expanse. One could argue that even imperfect measuring instruments afford more levels of differentiation than adjectives. And, of course, numbers can modify words by providing a count of units (2500 persons), indicating a rank (third most populated city in the country, as shown in the inset box in Figure 3.1), or placing the characteristic on some scale (SAT score of 800, with a mean of 600).
3.3.1 Variables

Before we process or analyze any data, we have to be able to capture and represent it. This is done with the help of variables. A variable is a label we give to our data. For instance, you can write down the age values of all your cousins in a table or a spreadsheet and label that column with “age.” Here, “age” is a variable, and it is of type numeric (and of “ratio” type, as we will soon see). If we then want to identify who is a student or not, we can create another column, and next to each cousin’s name we can write down “yes” or “no” under a new column called “student.” Here, “student” is a variable, and it is of type categorical (more on that soon).

Figure 3.1 Census data as a way to describe the population.3

Since a lot of what we will do in this book (and perhaps what you will do in a data science job) will deal with different forms of numerical information, let us look further into such variables. Numeric information can be separated into distinct categories that can be used to summarize data. The first stage of summarizing any numeric information is to identify the category to which it belongs. For example, the above section covered three operations for numbers: counting, ranking, and placing on a scale. Each of these corresponds to a different level of measurement. So, if people are classified based on their racial identities, statisticians can name the categories and count their contents. Such use defines the categorical variable. Think about the animal taxonomy that biologists use – the one with mammals, reptiles, etc. Those represent categorical levels. If we find it convenient to represent such categories using numbers, this becomes a nominal variable. Essentially, here we are using numbers to represent categories, but we cannot use those numbers for any meaningful mathematical or statistical operations. If we can differentiate among individuals within a group, we can use an ordinal variable to represent those values.
For example, we can rank a selection of people in terms of their apparent communication skill. But this statistic can only go so far; it cannot, for example, create an equal-unit scale. What this means is that, while we could order the entities, there is no enhanced meaning to that order. For instance, we cannot simply take someone at rank 5, subtract someone at rank 3, and say that the difference represents someone at rank 2. For that, we turn to an interval variable. Let us think about the measurement of temperature. We do it in Fahrenheit or Celsius. If the temperature is measured as 40 degrees Fahrenheit on a given day, that measure is placed on a scale with an arbitrary zero point (0 degrees Fahrenheit does not mean a complete absence of temperature). If the next day the temperature is 45 degrees Fahrenheit, we can say that the temperature has risen by 5 degrees (that is the difference). And 5 degrees Fahrenheit has physical meaning, unlike what happens with an ordinal level of measurement. This kind of scenario describes an interval level of measurement. Put another way, an interval level of measurement allows us to do additions and subtractions, but not multiplications or divisions. What does that mean? It means we cannot talk about doubling or halving temperature. OK, well, we could, but that multiplication or division has no physical meaning. Water boils at 100 degrees Celsius, but at 200 degrees Celsius water does not boil twice as much or twice as fast. For multiplication and division (as well as addition and subtraction), we turn to a ratio variable. This is common in the physical sciences and engineering. Examples include length (feet, yards, meters) and weight (pounds, kilograms). If a pound of grapes costs $5, two pounds will cost $10. If you have 4 yards of fabric, you can give 2 yards each to two of your friends. All of these categories of variables are fine when we are dealing with one variable at a time and doing descriptive analysis.
But when we are trying to connect multiple variables, or to use one set of variables to make predictions about another set, we may want to classify them with some other names. A variable that is thought to be controlled or not affected by other variables is called an independent variable. A variable that depends on other variables (most often other independent variables) is called a dependent variable. In the case of a prediction problem, an independent variable is also called a predictor variable, and a dependent variable is called an outcome variable. For instance, imagine we have data about tumor size for some patients and whether the patients have cancer or not. This could be in a table with two columns: “tumor size” and “cancer,” the former being a ratio-type variable (we can talk about one tumor being twice the size of another), and the latter being a categorical-type variable (“yes”, “no” values). Now imagine we want to use the “tumor size” variable to say something about the “cancer” variable. Later in this book we will see how something like this could be done under a class of problems called “classification.” But for now, we can think of “tumor size” as an independent or predictor variable and “cancer” as a dependent or outcome variable.

3.3.2 Frequency Distribution

Of course, data needs to be displayed. Once some data has been collected, it is useful to plot a graph showing how many times each score occurs. This is known as a frequency distribution. Frequency distributions come in different shapes and sizes. Therefore, it is important to have some general descriptions for common types of distribution. The following are some of the ways in which statisticians can present numerical findings.

Histogram. Histograms plot values of observations on the horizontal axis, with a bar showing how many times each value occurred in the dataset. Let us take a look at an example of how a histogram can be crafted out of a dataset.
Table 3.1 represents Productivity measured in terms of output for a group of data science professionals. Some of them went through extensive statistics training (represented as “Y” in the Training column) while others did not (N). The dataset also contains the work experience (denoted as Experience) of each professional in terms of number of working hours.

Table 3.1 Productivity dataset.

Productivity  Experience  Training
5             1           Y
2             0           N
10            10          Y
4             5           Y
6             5           Y
12            15          Y
5             10          Y
6             2           Y
4             4           Y
3             5           N
9             5           Y
8             10          Y
11            15          Y
13            19          Y
4             5           N
5             7           N
7             12          Y
8             15          N
12            20          Y
3             5           N
15            20          Y
8             16          N
4             9           N
6             17          Y
9             13          Y
7             6           Y
5             8           N
14            18          Y
7             17          N
6             6           Y

Try It Yourself 3.1: Variables

Before we continue with generating a histogram, let us use this table and make sure we have grasped the concepts about variables from before. Answer the following questions using Table 3.1.
1. What kind of variable is “Productivity”?
2. What kind of variable is “Experience”?
3. What kind of variable is “Training”?
4. We are trying to understand if, by looking at “Productivity” and “Experience,” we could predict whether someone went through training or not. In this scenario, identify the independent or predictor variable(s) and the dependent or outcome variable(s).

Hands-On Example 3.1: Histogram

A histogram can be created from the numbers in the Productivity column, as shown in Figure 3.2.

Figure 3.2 Histogram using the Productivity data.

Any spreadsheet program, for example, Microsoft Excel or Google Sheets, supports a host of visualization options, such as charts, plots, line graphs, maps, etc. If you are using a Google Sheet, the procedure to create the histogram is first to select the intended column, followed by selecting the option of “insert chart”, denoted by the icon in the toolbar, which will present you with the chart editor.
In the editor, select the option of histogram chart in the chart type dropdown, and it will create a chart like that in Figure 3.2. You can further customize the chart by specifying the color of the chart, the X-axis label, the Y-axis label, etc.

Try It Yourself 3.2: Histogram

Let us test your understanding of histograms and related concepts on the pizza franchise dataset from the Business Opportunity Handbook. The dataset is available from OA 3.1, where X represents the annual franchise fee in units of $100 and Y represents the startup cost in the same units. Using this data and your favorite spreadsheet program, plot the data to visualize how the startup cost changes with the franchise fee.

Hands-On Example 3.2: Pie Chart

A histogram worked fine for numerical data, but what about categorical data? In other words, how do we visualize the data when it is distributed in a few finite categories? We have such data in the third column, called “Training.” For that, we can create a pie chart, as shown in Figure 3.3. You can follow the same process as for a histogram if you are using a Google Sheet. The key difference is that here you have to select the pie chart as the chart type in the chart editor.

Figure 3.3 Pie chart showing the distribution of “Training” in the Productivity data.

We will often be working with data that are numerical, and we will need to understand how those numbers are spread. For that, we can look at the nature of their distribution. It turns out that, if the data is normally distributed, various forms of analyses become easy and straightforward. What is a normal distribution?

Normal Distribution. In an ideal world, data would be distributed symmetrically around the center of all scores. Thus, if we drew a vertical line through the center of a distribution, both sides should look the same. This so-called normal distribution is characterized by a bell-shaped curve, an example of which is shown in Figure 3.4.
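Outside a spreadsheet, the frequency counts behind a histogram such as the one in Hands-On Example 3.1 can be computed directly. Below is a minimal Python sketch (standard library only) that bins the Productivity column of Table 3.1; the choice of 2-unit-wide bins is ours for illustration, not something dictated by the data.

```python
from collections import Counter

# Productivity column from Table 3.1 (all 30 professionals)
productivity = [5, 2, 10, 4, 6, 12, 5, 6, 4, 3, 9, 8, 11, 13, 4,
                5, 7, 8, 12, 3, 15, 8, 4, 6, 9, 7, 5, 14, 7, 6]

BIN_WIDTH = 2  # bins [2, 4), [4, 6), ... chosen for illustration

# Map each value to the lower edge of its bin, then tally per bin
bins = Counter((x // BIN_WIDTH) * BIN_WIDTH for x in productivity)

# Print a sideways text histogram, one row per bin
for edge in sorted(bins):
    print(f"[{edge:2d}, {edge + BIN_WIDTH:2d}): {'#' * bins[edge]}")
```

Running this prints one row of `#` marks per bin; the tallest row plays the role of the tallest bar in a spreadsheet histogram.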
There are two ways in which a distribution can deviate from normal:
• Lack of symmetry (called skew)
• Pointiness (called kurtosis)

As shown in Figure 3.5, a skewed distribution can be either positively skewed (Figure 3.5a) or negatively skewed (Figure 3.5b). Kurtosis, on the other hand, refers to the degree to which scores cluster at the ends of a distribution (platykurtic) and how “pointy” a distribution is (leptokurtic), as shown in Figure 3.6. There are ways to compute numbers for these distributions that give us a sense of their skewness and kurtosis, but we will skip that for now. At this point, we will leave the judgment of the normality of a distribution to visual inspection using histograms such as those shown here. As we acquire appropriate statistical tools in the next section of this book, we will see how to run some tests to find out whether a distribution is normal or not.

Try It Yourself 3.3: Distributions

What does the shape of the histogram distribution from Try It Yourself 3.2 look like? What does this kind of shape tell us about the underlying data?

Figure 3.4 Example of a normal distribution.

Figure 3.5 Examples of skewed distributions.

3.3.3 Measures of Centrality

Often, one number can tell us enough about a distribution. This is typically a number that points to the “center” of a distribution. In other words, we can calculate where the “center” of a frequency distribution lies, which is also known as the central tendency. We put “center” in quotes because it depends on how it is defined. There are three measures commonly used: mean, median, and mode.

Mean.
You have come across this before even if you have never done statistics. The mean is commonly known as the average, though the two are not exactly synonyms. The mean is most often used to measure the central tendency of continuous data, as well as of a discrete dataset. If there are n values in a dataset, x1, x2, . . ., xn, then the mean is calculated as

x̄ = (x1 + x2 + x3 + · · · + xn) / n.   (3.1)

Using the above formula, the mean of the Productivity column in Table 3.1 comes out to be 7.267. Go ahead and verify this. There is a significant drawback to using the mean as a central statistic: it is susceptible to the influence of outliers. Also, the mean is only meaningful if the data is normally distributed, or at least close to looking like a normal distribution. Take the distribution of household income in the USA, for instance. Figure 3.7 shows this distribution, obtained from the US Census Bureau. Does that distribution look normal? No. A few people make a lot of money and a lot of people make very little money. This is a highly skewed distribution. If you take the mean or average of this data, it will not be a good representation of income for this population. So, what can we do? We can use another measure of central tendency: the median.

Median. The median is the middle score of a dataset that has been sorted according to the values of the data. With an even number of values, the median is calculated as the average of the middle two data points. For example, for the Productivity dataset, the median of Experience is 9.5. What about US household income? The median income in the USA, as of 2014, is $53,700. That means half the people in the USA are making $53,700 or less, and the other half are on the other side of that threshold.

Mode. The mode is the most frequently occurring value in a dataset. On a histogram, the highest bar denotes the mode of the data.
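The mean and median values quoted above are easy to verify with Python's built-in statistics module. A quick sketch (the two lists repeat the columns of Table 3.1 so the snippet stands alone):

```python
import statistics

# Productivity and Experience columns from Table 3.1
productivity = [5, 2, 10, 4, 6, 12, 5, 6, 4, 3, 9, 8, 11, 13, 4,
                5, 7, 8, 12, 3, 15, 8, 4, 6, 9, 7, 5, 14, 7, 6]
experience = [1, 0, 10, 5, 5, 15, 10, 2, 4, 5, 5, 10, 15, 19, 5,
              7, 12, 15, 20, 5, 20, 16, 9, 17, 13, 6, 8, 18, 17, 6]

mean_productivity = statistics.mean(productivity)   # Equation 3.1
median_experience = statistics.median(experience)   # average of the two middle sorted values

print(round(mean_productivity, 3))  # 7.267
print(median_experience)            # 9.5
```

With 30 values, `statistics.median` averages the 15th and 16th sorted values (9 and 10), giving the 9.5 quoted in the text.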
Normally, the mode is used for categorical data; for example, for the Training component of the Productivity dataset, the most common category is the desired output.

Figure 3.6 Examples of different kurtosis in a distribution (orange dashed line represents leptokurtic, blue solid line represents the normal distribution, and red dotted line represents platykurtic).

As depicted in Figure 3.8, in the Productivity dataset there are 10 instances of N and 20 instances of Y values in Training. So, in this case, the mode for Training is Y. [Note: If the number of instances of Y and N were the same, then there would be no mode for Training.]

3.3.4 Dispersion of a Distribution

We saw in Section 3.3.2 that distributions come in all shapes and sizes. Simply looking at a central point (mean, median, or mode) may not help in understanding the actual shape of a distribution. Therefore, we often look at the spread, or the dispersion, of a distribution. The following are some of the most common measures of dispersion.

Figure 3.7 Income distribution in the United States based on the latest census data available.5

Figure 3.8 Visualizing mode for the Productivity data.

Range. The easiest way to look at the dispersion is to subtract the smallest score from the largest score. This is known as the range. For the Productivity dataset, the range of the Productivity column would be 13. There is, however, a disadvantage to using the range: because it uses only the highest and lowest values, extreme scores or outliers tend to result in an inaccurate picture of the more likely range.

Interquartile Range. One way around the range’s disadvantage is to calculate it after removing extreme values. One convention is to cut off the top and bottom one-quarter of the data and calculate the range of the remaining middle 50% of the scores.
This is known as the interquartile range. For example, the interquartile range of “Experience” in the Productivity dataset would be 10.

Hands-On Example 3.3: Interquartile Range

Can we easily find the interquartile range, and even visualize it? The answer is “yes.” Let us revisit the data in Table 3.1 and focus on the “Experience” column. If we sort it, we get Table 3.2.

Table 3.2 Sorted “Experience” column from the Productivity dataset.

Experience
0
1
2
4
5
5
5
5
5
5
6
6
7
8
9
10
10
10
12
13
15
15
15
16
17
17
18
19
20
20

There are 30 numbers here, and we are looking for the middle 15 numbers. That gives us 5, 5, 5, 6, 6, 7, 8, 9, 10, 10, 10, 12, 13, 15, and 15. Now we can see that the range of these numbers is 10 (min = 5 to max = 15). And that is our interquartile range here. We could also visualize this whole process using boxplots. Figure 3.9 shows boxplots for the “Productivity” and “Experience” columns.

Figure 3.9 Boxplot for the “Productivity” and “Experience” columns of the Productivity dataset.

As shown in the boxplot for the “Experience” attribute, after removing the top one-fourth of the values (between 15 and 20) and the bottom one-fourth (close to zero to 5), the range of the remaining data can be calculated as 10 (from 5 to 15). Likewise, the interquartile range of the “Productivity” attribute can be calculated as 5.

Try It Yourself 3.4: Interquartile Range

For this exercise, you are going to use the fire and theft data from the same zip codes of the Chicago metropolitan area (reference: US Commission on Civil Rights). The dataset available from OA 3.2 has observations in pairs:
To measure the variance, the common method is to pick a center of the distribution, typically the mean, then measure how far each data point is from the center. If the individual observations vary greatly from the group mean, the variance is big; and vice versa. Here, it is important to distinguish between the variance of a population and the variance of a sample. They have different notations, and they are computed differently. The variance of a population is denoted by σ2; and the variance of a sample by s2. The variance of a population is defined by the following formula: σ2 ¼ X ðXi � X Þ2 N ; ð3:2Þ where σ2 is the population variance, X is the population mean, Xi is the ith element from the population, and N is the number of elements in the population. The variance of a sample is defined by a slightly different formula: s2 ¼ X ðxi � xÞ2 ðn� 1Þ ; ð3:3Þ where s2 is the sample variance, x is the sample mean, xi is the ith element from the sample, and n is the number of elements in the sample. Using this formula, the variance of the sample is an unbiased estimate of the variance of the population. Example: In the Productivity dataset given in Table 3.1, we find by applying the formula in Equation 3.3 that the variance of the Productivity attribute can be calculated as 11.93 (approxi- mated to two decimal places), and the variance of the Experience can be calculated as 36. Figure 3.10 A snapshot from Google Sheets showing how to compute the standard deviation. 80 Techniques Standard Deviation. There is one issue with the variance as a measure. It gives us the measure of spread in units squared. So, for example, if we measure the variance of age (measured in years) of all the students in a class, the measure we will get will be in years2. However, practically, it would make more sense if we got the measure in years (not years squared). 
For this reason, we often take the square root of the variance, which ensures that the measure of average spread is in the same units as the original measure. This measure is known as the standard deviation (see Figure 3.10). The formula to compute the standard deviation of a sample is

s = √( Σ(xi − x̄)² / (n − 1) ).   (3.4)

FYI: Comparing Distributions and Hypothesis Testing

Often there is a need to compare different distributions to derive some important insights or make decisions. For instance, suppose we want to see whether our new marketing strategy is changing our customers’ spending behaviors from last month to this month. Let us assume that we have data about each customer’s spending amounts for both of these months. Using that data, we can plot a histogram per month that shows on the x-axis the amount spent in that month and on the y-axis the number of customers who spent that amount. Now the question is: Are these two plots different enough to say that the new marketing strategy is effective? This is not something that can be easily answered by visual inspection. For this, there are several statistical tests one could run that compare the two distributions and tell us whether they are different. Normally, for this, we begin by stating our hypotheses. A hypothesis is a way to state our assumption or belief in a form that can be tested. The default knowledge or assumption is stated as the null hypothesis, and the opposite of that is called the alternative hypothesis. So, in this case, our null hypothesis could say that there is no difference between the two distributions, and the alternative hypothesis would state that there is indeed a difference. Next, we run one (or several) of those statistical tests. Almost any statistical package that you use will have built-in functions or packages to run such tests. Often, they are very easy to do.
Typically, the result of a test is some score and, more importantly, a confidence or probability value (frequently referred to as the p-value) that indicates how consistent the observed data is with the two distributions being the same. If this value is very small (typically less than 0.05, or 5%), we can reject the null hypothesis (that the distributions are the same) and accept the alternative hypothesis (that they are not). And that gives us our conclusion. In short, if we run a statistical test and the p-value comes out less than or equal to 0.05, we can conclude that the new marketing strategy did indeed have the effect (considering that no other variables were changed).

Try It Yourself 3.5: Standard Deviation

For this exercise, use the ANAEROB dataset that is available for download from OA 3.3. The dataset has 53 observations (numbers) of oxygen uptake and expired ventilation. Use this data to calculate the standard deviation of each attribute individually.

3.4 Diagnostic Analytics

Diagnostic analytics is used for discovery, or to determine why something happened. Sometimes, when done hands-on with a small dataset, this type of analytics is also known as causal analysis, since it involves at least one cause (usually more than one) and one effect. It allows a look at past performance to determine what happened and why. The result of the analysis is often presented in an analytic dashboard. For example, for a social media marketing campaign, you can use descriptive analytics to assess the number of posts, mentions, followers, fans, page views, reviews, pins, etc. Thousands of online mentions can be distilled into a single view to see what worked and what did not work in your past campaigns. There are various types of techniques available for diagnostic or causal analytics. Among them, one of the most frequently used is correlation.
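Before moving on, the hypothesis-testing procedure sketched in the FYI box above can be made concrete with a permutation test, which needs nothing beyond the standard library. The spending figures below are made-up illustration data (not from the book): we pool the two months, reshuffle them many times, and ask how often a mean difference at least as large as the observed one arises by chance.

```python
import random

random.seed(0)  # make the shuffles reproducible

# Hypothetical per-customer spending amounts for two months
last_month = [42, 55, 38, 47, 51, 44, 40, 49, 45, 50]
this_month = [58, 61, 49, 66, 57, 54, 63, 59, 52, 60]

n = len(last_month)
observed = sum(this_month) / n - sum(last_month) / n  # mean difference

# Null hypothesis: both months come from the same distribution,
# so the month labels are interchangeable. Shuffle the pooled values
# and count how often a difference this large appears by chance.
pooled = last_month + this_month
hits = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[n:]) / n - sum(pooled[:n]) / n
    if diff >= observed:
        hits += 1

p_value = hits / trials
print(p_value)  # well below 0.05 for this data
```

Since the p-value comes out far below 0.05, we would reject the null hypothesis and conclude that spending in the two months differs. Ready-made tests (e.g., t-tests in statistical packages) follow the same reject-or-not logic.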
3.4.1 Correlations

Correlation is a statistical analysis that is used to measure and describe the strength and direction of the relationship between two variables. Strength indicates how closely two variables are related to each other, and direction indicates how one variable would change its value as the value of the other variable changes. Correlation is a simple statistical measure that examines how two variables change together over time. Take, for example, “umbrella” and “rain.” If someone who grew up in a place where it never rained saw rain for the first time, this person would observe that, whenever it rains, people use umbrellas. They may also notice that, on dry days, folks do not carry umbrellas. By definition, “rain” and “umbrella” are said to be correlated! More specifically, this relationship is strong and positive. Think about this for a second. An important statistic, the Pearson’s r correlation, is widely used to measure the degree of the relationship between linearly related variables. When examining the stock market, for example, the Pearson’s r correlation can measure the degree to which two commodities are related.
The following formula is used to calculate the Pearson’s r correlation:

r = (N Σxy − Σx Σy) / √( [N Σx² − (Σx)²] [N Σy² − (Σy)²] ),   (3.5)

where
r = Pearson’s r correlation coefficient,
N = number of values in each dataset,
Σxy = sum of the products of paired scores,
Σx = sum of x scores,
Σy = sum of y scores,
Σx² = sum of squared x scores, and
Σy² = sum of squared y scores.6

Hands-On Example 3.4: Correlation

Let us use the formula in Equation 3.5 and calculate the Pearson’s r correlation coefficient for the height–weight pairs provided in Table 3.3. First we will calculate the various quantities needed for the formula:

N = number of values in each dataset = 10
Σxy = sum of the products of paired scores = 98,335.30
Σx = sum of x scores = 670.70
Σy = sum of y scores = 1463
Σx² = sum of squared x scores = 45,058.21
Σy² = sum of squared y scores = 218,015

Plugging these into the Pearson’s r correlation formula gives us 0.39 (approximated to two decimal places) as the correlation coefficient. This indicates two things: (1) “height” and “weight” are positively related, which means that, as one goes up, so does the other; and (2) the strength of their relationship is medium.

Try It Yourself 3.6: Correlation

For this exercise, you are going to use the nasal dimension data of male gray kangaroos (Australian Journal of Zoology, 28, 607–613). The dataset can be downloaded from OA 3.4. It has two attributes: in each pair, X represents the nasal length (in units of 10 mm), and the corresponding Y represents the nasal width. Use this dataset to test the correlation between X and Y.

Table 3.3 Height–weight data.
Height  Weight
64.5    118
73.3    143
68.8    172
65      147
69      146
64.5    138
66      175
66.3    134
68.8    172
64.5    118

3.5 Predictive Analytics

As you may have guessed, predictive analytics has its roots in our ability to predict what might happen. These analytics are about understanding the future using the data and the trends we have seen in the past, as well as emerging new contexts and processes. An example is trying to predict how people will spend their tax refunds based on how consumers normally behave around a given time of the year (past data and trends), and how a new tax policy (new context) may affect people's refunds.

Predictive analytics provides companies with actionable insights based on data. Such information includes estimates about the likelihood of a future outcome. It is important to remember that no statistical algorithm can "predict" the future with 100% certainty because the foundation of predictive analytics is based on probabilities. Companies use these statistics to forecast what might happen. Some of the software most commonly used by data science professionals for predictive analytics are SAS predictive analytics, IBM predictive analytics, RapidMiner, and others.

As Figure 3.11 suggests, predictive analytics is done in stages.

1. First, once the data collection is complete, it needs to go through the process of cleaning (refer to Chapter 2 on data).
2. Cleaned data can help us obtain hindsight in relationships between different variables. Plotting the data (e.g., on a scatterplot) is a good place to look for hindsight.
3. Next, we need to confirm the existence of such relationships in the data. This is where regression comes into play. From the regression equation, we can confirm the pattern of distribution inside the data. In other words, we obtain insight from hindsight.
4. Finally, based on the identified patterns, or insight, we can predict the future, i.e., foresight.
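These four stages can be sketched in a few lines of Python. The data here is hypothetical (monthly ad spend vs. sales), and numpy's `polyfit` stands in for the regression step:

```python
import numpy as np

# Stage 1-2: collected data, then cleaning (drop missing values)
ad_spend = np.array([500, 1000, np.nan, 1500, 2000, 2500])   # hypothetical
sales    = np.array([12.1, 14.0, 13.5, 15.8, np.nan, 19.2])  # hypothetical
mask = ~np.isnan(ad_spend) & ~np.isnan(sales)
x, y = ad_spend[mask], sales[mask]

# Stage 2 (hindsight): a scatterplot of x vs. y would go here,
# e.g., plt.scatter(x, y) with matplotlib

# Stage 3 (insight): fit a regression line to confirm the relationship
slope, intercept = np.polyfit(x, y, 1)

# Stage 4 (foresight): predict sales for a planned spend of 3000
print(round(slope * 3000 + intercept, 1))
```

The point is not the specific numbers but the shape of the pipeline: clean, look, confirm, then predict.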
The following example illustrates a use for predictive analytics.8 Let us assume that Salesforce kept campaign data for the last eight quarters. This data comprises total sales generated by newspaper, TV, and online ad campaigns and associated expenditures, as provided in Table 3.4.

Figure 3.11 Process of predictive analytics.7 [Pipeline: Collect data → Clean data → Identify patterns → Make predictions, yielding hindsight → insight → foresight.]

With this data, we can predict the sales based on the expenditures of ad campaigns in different media for Salesforce. Like data analytics, predictive analytics has a number of common applications. For example, many people turn to predictive analytics to produce their credit scores. Financial services use such numbers to determine the probability that a customer will make their credit payments on time. FICO, in particular, has extensively used predictive analytics to develop the methodology to calculate individual FICO scores.9

Customer relationship management (CRM) constitutes another common area for predictive analytics. Here, the process contributes to objectives such as marketing campaigns, sales, and customer service. Predictive analytics applications are also used in the healthcare field. They can determine which patients are at risk for developing certain conditions such as diabetes, asthma, and other chronic or serious illnesses.

3.6 Prescriptive Analytics

Prescriptive analytics10 is the area of business analytics dedicated to finding the best course of action for a given situation. This may start by first analyzing the situation (using descriptive analysis), but then moves toward finding connections among various parameters/variables, and their relation to each other, to address a specific problem, more likely that of prediction.
A process-intensive task, the prescriptive approach analyzes potential decisions, the interactions between decisions, the influences that bear upon these decisions, and the bearing all of this has on an outcome, to ultimately prescribe an optimal course of action in real time.11 Prescriptive analytics can also suggest options for taking advantage of a future opportunity or mitigating a future risk, and illustrate the implications of each. In practice, prescriptive analytics can continually and automatically process new data to improve the accuracy of predictions and provide advantageous decision options.

Table 3.4 Data for doing predictive analytics.

Serial  Sales   Newspaper  TV    Online
1       16,850  1000       500   1500
2       12,010  500        500   500
3       14,740  2000       500   500
4       13,890  1000       1000  1000
5       12,950  1000       500   500
6       15,640  500        1000  1000
7       14,960  1000       1000  1000
8       13,630  500        1500  500

Specific techniques used in prescriptive analytics include optimization, simulation, game theory,12 and decision-analysis methods. Prescriptive analytics can be really valuable in deriving insights from given data, but it is largely underused.13 According to Gartner,14 13% of organizations are using predictive analytics, but only 3% are using prescriptive analytics. Where big data analytics in general sheds light on a subject, prescriptive analytics gives you laser-like focus to answer specific questions. For example, in healthcare, we can better manage the patient population by using prescriptive analytics to measure the number of patients who are clinically obese, then add filters for factors like diabetes and LDL cholesterol levels to determine where to focus treatment.

There are two more categories of data analysis techniques that are different from the above-mentioned four categories – exploratory analysis and mechanistic analysis.

3.7 Exploratory Analysis

Often when working with data, we may not have a clear understanding of the problem or the situation.
And yet, we may be called on to provide some insights. In other words, we are asked to provide an answer without knowing the question! This is where we go for an exploration. Exploratory analysis is an approach to analyzing datasets to find previously unknown relationships. Often such analysis involves using various data visualization approaches. Yes, sometimes seeing is believing! But more importantly, when we lack a clear question or a hypothesis, plotting the data in different forms could provide us with some clues regarding what we may find or want to find in the data. Such insights can then be useful for defining future studies/questions, leading to other forms of analysis. Usually not the definitive answer to the question at hand but only the start, exploratory analysis should not be used alone for generalizing and/or making predictions from the data.

Exploratory data analysis is an approach that postpones the usual assumptions about what kind of model the data follows in favor of the more direct approach of allowing the data itself to reveal its underlying structure in the form of a model. Thus, exploratory analysis is not a mere collection of techniques; rather, it offers a philosophy as to how to dissect a dataset: what to look for, how to look, and how to interpret the outcomes. As exploratory analysis consists of a range of techniques, its application is varied as well. However, the most common application is looking for patterns in the data, such as finding groups of similar genes from a collection of samples.15

Let us consider the US census data available from the US census website.16 This data has dozens of variables; we have already seen some of them in Figures 3.1 and 3.7. If you are looking for something specific (e.g., which state has the highest population), you could go with descriptive analysis.
If you are trying to predict something (e.g., which city will have the lowest influx of immigrant population), you could use prescriptive or predictive analysis. But, if someone gave you this data and asked you to find interesting insights, then what do you do? You could still do descriptive or prescriptive analysis, but given that there are lots of variables with massive amounts of data, it may be futile to do all possible combinations of those variables. So, you need to go exploring. That could mean a number of things. Remember, exploratory analysis is about the methodology or philosophy of doing the analysis, rather than a specific technique. Here, for instance, you could take a small sample (data and/or variables) from the entire dataset and plot some of the variables (bar chart, scatterplot). Perhaps you see something interesting. You could go ahead and organize some of the data points along one or two dimensions (variables) to see if you find any patterns. The list goes on. We are not going to see these approaches/techniques right here. Instead, you will encounter them (e.g., clustering, visualization, classification, etc.) in various parts of this book.

3.8 Mechanistic Analysis

Mechanistic analysis involves understanding the exact changes in variables that lead to changes in other variables for individual objects. For instance, we may want to know how the number of free doughnuts per employee per day affects employee productivity. Perhaps by giving them one extra doughnut we gain a 5% productivity boost, but two extra doughnuts could end up making them lazy (and diabetic)! More seriously, though, think about studying the effects of carbon emissions on bringing about the Earth's climate change. Here, we are interested in seeing how the increased amount of CO2 in the atmosphere is causing the overall temperature to change.
We now know that, in the last 150 years, the CO2 levels have gone from 280 parts per million to 400 parts per million.17 And in that time, the Earth has heated up by 1.53 degrees Fahrenheit (0.85 degrees Celsius).18 This is a clear sign of climate change, something that we all need to be concerned about, but I will leave it there for now. What I want to bring you back to thinking about is the kind of analysis we presented here – that of studying a relationship between two variables. Such relationships are often explored using regression.

3.8.1 Regression

In statistical modeling, regression analysis is a process for estimating the relationships among variables. Given this definition, you may wonder how regression differs from correlation. The answer can be found in the limitations of correlation analysis. Correlation by itself does not provide any indication of how one variable can be predicted from another. Regression provides this crucial information.

Beyond estimating a relationship, regression analysis is a way of predicting an outcome variable from one predictor variable (simple linear regression) or several predictor variables (multiple linear regression). Linear regression, the most common form of regression used in data analysis, assumes this relationship to be linear. In other words, the relationship of the predictor variable(s) and outcome variable can be expressed by a straight line. If the predictor variable is represented by x, and the outcome variable is represented by y, then the relationship can be expressed by the equation

y = \beta_0 + \beta_1 x    (3.6)

where β1 represents the slope of the line with respect to x, and β0 is the intercept or error term for the equation. What linear regression does is estimate the values of β0 and β1 from a set of observed data points, where the values of x, and associated values of y, are provided.
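As an illustration of this estimation, here is a short sketch (with hypothetical data, not from the book) that fits β0 and β1 by the standard least-squares formulas:

```python
def fit_line(x, y):
    """Estimate beta0 (intercept) and beta1 (slope) by ordinary least squares."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # Least-squares slope: covariance of x and y over variance of x
    beta1 = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y)) / \
            sum((a - mean_x) ** 2 for a in x)
    beta0 = mean_y - beta1 * mean_x   # intercept
    return beta0, beta1

# Hypothetical observations lying exactly on the line y = 2x + 1
x = [1, 2, 3, 4, 5]
y = [3, 5, 7, 9, 11]
beta0, beta1 = fit_line(x, y)
print(beta0, beta1)   # → 1.0 2.0
```

With β0 and β1 in hand, predicting y for a new x is just a matter of plugging into the line equation.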
So, when a new or previously unobserved data point comes where the value of y is unknown, we can fit the values of x, β0, and β1 into the above equation to predict the value of y. From statistical analysis, it has been shown that the slope of the regression, β1, can be expressed by the following equation:

\beta_1 = r \, \frac{sd_y}{sd_x}    (3.7)

where r is the Pearson's correlation coefficient, and sd represents the standard deviation of the respective variable as calculated from the observed set of data points. Next, the value of the error term can be calculated from the following formula:

\beta_0 = \bar{y} - \beta_1 \bar{x}    (3.8)

where \bar{y} and \bar{x} represent the means of the y and x variables, respectively. (More on these equations can be found in later chapters.) Once you have these values calculated, it is possible to estimate the value of y from the value of x.

Hands-On Example 3.5: Regression

We use the attitude dataset in Table 3.5. The first variable, attitude, represents the amount of positive attitude of the students who have taken an examination, and the score represents the marks scored by the participants in the examination.

Table 3.5 Attitude and score data.

#    Attitude  Score
1    65        129
2    67        126
3    68        143
4    70        156
5    71        161
6    72        158
7    72        168
8    73        166
9    73        182
10   75        201

Here attitude is going to be the predictor variable, and what regression would be able to do is to estimate the value of score from attitude. As explained above, first let us calculate the value of the slope, β1. From the data, Pearson's correlation coefficient r can be calculated as 0.94. The standard deviations of x (attitude) and y (score) are 3.10 and 22.80, respectively. Therefore, the value of the slope is

\beta_1 = 0.94 \times \frac{22.80}{3.10} = 6.91

Next, the calculation of the error term β0 requires the mean values of x and y. From the given dataset, \bar{y} and \bar{x} are derived as 159 and 70.6, respectively.
Therefore, the value of β0 will be

\beta_0 = 159 - (6.91 \times 70.6) = -328.85

Now, say you have a new participant whose positive attitude before taking the examination is measured at 78. His score in the examination can be estimated at 210.13:

y = -328.85 + (6.91 \times 78) = 210.13

Regression analysis has a number of salient applications in data science and other statistical fields. In the business realm, for example, powerful linear regression can be used to generate insights on consumer behavior, which helps professionals understand business and factors related to profitability. It can also help a corporation understand how sensitive its sales are to advertising expenditures, or it can examine how a stock price is affected by changes in interest rates. Regression analysis may even be used to look to the future; an equation may forecast demand for a company's products or predict stock behaviors.19

Try It Yourself 3.7: Regression

Obtain the Container Crane Controller Data Set available from OA 3.5. A container crane is used to transport containers from one place to another. The difficulty of this task lies in the fact that the bridge crane is connected to the container by cables, causing an opening angle while the container is being transported. Interference with the operation at high speeds, due to the oscillation that occurs at the end-point, could cause accidents. Use regression analysis to predict the power from speed and angle.

Summary

In this chapter, we reviewed some of the techniques and approaches used for data science. As should be evident, a lot of this revolves around statistics. And there is no way we could even introduce all of the statistics in one chapter. Therefore, this chapter focused on providing broader strokes of what these approaches and analyses are, with a few concrete examples and applications. As we proceed, many of these broad strokes will become more precise.
Another reason for skimping on the details here is our lack of knowledge of (or assumptions about) any specific programming tool. You will soon see that, while it is possible to have a theoretical understanding of statistical analysis, for a hands-on data science approach it makes more sense to actually do stuff and gain an understanding of such analysis. And so, in the next part of the book, we are going to cover a bunch of tools and, while doing so, we will come back to most of these techniques. Then, we will have a chance to really understand different kinds of analysis and analytics as we apply them to solve various data problems.

Almost all real-life data science-related problems use more than one category of the analysis techniques described above. The number and types of categories used for analysis can be an indicator of the quality of the analysis. For example, in social science-related problems:

• A weak analysis will only tell a story or describe the topic.
• A good analysis will go beyond a mere description by engaging in several of the types of analysis listed above, but it will be weak on sociological analysis, the future orientation, and the development of social policy.
• An excellent analysis will engage in many of the types of analyses we have discussed and will demonstrate an aggressive sociological analysis which develops a clear future orientation and offers social policy changes to address problems associated with the topic.

There is no clear agreeable-to-all classification scheme available in the literature to categorize all the analysis techniques that are used by data science professionals. However, based on their application to various stages of data analysis, we categorized analysis techniques into certain classes. Each of these categories and their application were described – some at length and some less so – with an understanding that we will revisit them later when we are addressing various data problems.
I hope that with this chapter you can see that familiarity with various statistical measures and techniques is an integral part of being a data scientist. Armed with this arsenal of tools, you can take your skills and make important discoveries for a number of people in a number of areas.

FYI: Algorithmic Bias

Bias is caused not only by the data we use, as we saw in the previous chapter, but also by the algorithms and the techniques we use. We see biases introduced by algorithms all around us. For instance, automated decision-making (ADM) systems run on algorithms and are present in processes that can affect whether one person gets a good credit score or another person gets parole. The systems making these predictions are based on assumptions that are programmed into algorithms. And what are assumptions? Well, these are human-created perceptions and preconceived notions. And since they are created by humans, they are prone to problems of any such creation; they could be false, faulty, or simply a form of prejudice. For example, a June 2017 study by Matthias Spielkamp (Spielkamp, M. (2017). Inspecting algorithms for bias. MIT Technology Review) showed that the stop-and-frisk practice that the New York City Police Department used from 2004 to 2012 to temporarily detain, question, and search individuals on the street whom they deemed suspicious turned out to have been a gross miscalculation based on human bias. The actual data revealed that 88% of those stopped were not and did not become offenders. Moral of the story? Do not trust the data or the technique blindly; they may be perpetuating the inherent biases and prejudices we already have.

Key Terms

• Data analysis: This is a process that refers to hands-on data exploration and evaluation. Analysis looks backwards, providing marketers with a historical view of what has happened. Analytics, on the other hand, models the future or predicts a result.
• Data analytics: This defines the science behind the analysis. The science means understanding the cognitive processes an analyst uses to understand problems and explore data in meaningful ways. It is used to model the future or predict a result.
• Nominal variable: The variable type is nominal when there is no natural order between the possible values that it stores, for example, colors.
• Ordinal variable: If the possible values of a data type are from an ordered set, then the type is ordinal. For example, grades in a mark sheet.
• Interval variable: A kind of variable that provides numerical storage and allows us to do additions and subtractions on them, but not multiplications or divisions. Example: temperature.
• Ratio variable: A kind of variable that provides numerical storage and allows us to do additions and subtractions, as well as multiplications or divisions, on them. Example: weight.
• Independent/predictor variable: A variable that is thought to be controlled or not affected by other variables.
• Dependent/outcome/response variable: A variable that depends on other variables (most often other independent variables).
• Mean: Mean is the average of continuous data, found by the summation of the given data and dividing by the number of data entries.
• Median: Median is the middle data point in any ordinal dataset.
• Mode: Mode of a dataset is the value that occurs most frequently.
• Normal distribution: A normal distribution is a type of distribution of data points in which, when ordered, most values cluster in the middle of the range and the rest of the values symmetrically taper off toward both extremes.
• Correlation: This indicates how closely two variables are related and ranges from −1 (negatively related) to +1 (positively related). A correlation of 0 indicates no relation between the variables.
• Regression: Regression is a measure of the functional relationship between two or more correlated variables, in which typically the relation is used to estimate the value of the outcome variable from the predictor(s).
• Descriptive analysis: This is a method for quantitatively describing the main features of a collection of data.
• Diagnostic analytics: Also known as causal analysis, it is used for discovery, or to determine why something happened. It often involves at least one cause (usually more than one) and one effect.
• Predictive analytics: This involves understanding the future using the data and the trends we have seen in the past, as well as emerging new contexts and processes.
• Prescriptive analytics: This is the area of business analytics dedicated to finding the best course of action for a given situation.
• Exploratory analysis: This is an approach to analyzing datasets to find previously unknown relationships. Often such analysis involves using various data visualization approaches.
• Mechanistic analysis: This involves understanding the exact changes in variables that lead to changes in other variables for individual objects.

Conceptual Questions

1. How do data analysis and data analytics differ?
2. Name three measures of centrality and describe how they differ.
3. You are looking at data about tax refunds people get. Which measure of centrality would you use to describe this data? Why?
4. In this chapter we saw that the distribution of household income is a skewed distribution. Find two more examples of skewed distributions.
5. Describe how exploratory analysis differs from predictive analysis.
6. List two differences between correlation analysis and regression analysis.
7. What is a predictor variable?

Hands-On Problems

Problem 3.1

Imagine 10 years down the line, in a dark and gloomy world, your data science career has failed to take off. Instead, you have settled for the much less glamorous job of a community librarian.
Now, to simplify the logistics, the library has decided to limit all future procurement of books either to hardback or to softback copies. The library also plans to convert all the existing books to one cover type later. Fortunately, to help you decide, the library has gathered a small sample of data that gives measurements on the volume, area (only the cover of the book), and weight of 15 existing books, some of which are softback ("Pb") and the rest are hardback ("Hb") copies. The dataset is shown in the table and can be obtained from OA 3.6.

     Volume  Area  Weight  Cover
1    885     382   800     Hb
2    1016    468   950     Hb
3    1125    387   1050    Hb
4    239     371   350     Hb
5    701     371   750     Hb
6    641     367   600     Hb
7    1228    396   1075    Hb
8    412     257   250     Pb
9    953     300   700     Pb
10   929     301   650     Pb
11   1492    403   975     Pb
12   419     213   350     Pb
13   1010    432   950     Pb
14   595     262   425     Pb
15   1034    380   725     Pb

The table above shows that the dataset has 15 instances of the following four attributes:

• Volume: Book volume in cubic centimeters
• Area: Total area of the book in square centimeters
• Weight: Book weight in grams
• Cover: A factor with two levels: Hb for hardback, and Pb for paperback

Now use this dataset to decide which type of book you want to procure in the future. Here is how you are going to do it. Determine:

a. The median of the book covers.
b. The mean of the book weights.
c. The variance in book volumes.

Use the above values to decide which book cover type the library should opt for in the future.

Problem 3.2

Following is a small dataset of list price vs. best price for a new GMC pickup truck in $1000s. You can obtain it from OA 3.7. The x represents the list price, whereas the y represents the best price values.
x     y
12.4  11.2
14.3  12.5
14.5  12.7
14.9  13.1
16.1  14.1
16.9  14.8
16.5  14.4
15.4  13.4
17    14.9
17.9  15.6
18.8  16.4
20.3  17.7
22.4  19.6
19.4  16.9
15.5  14
16.7  14.6
17.3  15.1
18.4  16.1
19.2  16.8
17.4  15.2
19.5  17
19.7  17.2
21.2  18.6

Now, use this dataset to complete the following tasks:

a. Determine the Pearson's correlation coefficient between the list price and best price.
b. Establish a linear regression relationship between list price and best price.
c. Based on the relationship you found, determine the best price of a pickup whose list price is 25.2 in $1000s.

Problem 3.3

The following is a fictional dataset on the number of visitors to Asbury Park, NJ, in hundreds a day, the number of tickets issued for parking violations, and the average temperature (in degrees Celsius) for the same day.

Number of visitors (in hundreds a day)  Number of parking tickets  Average temperature
15.8                                    8                          35
12.3                                    6                          38
19.5                                    9                          32
8.9                                     4                          26
11.4                                    6                          31
17.6                                    9                          36
16.5                                    10                         38
14.7                                    3                          30
3.9                                     1                          21
14.6                                    9                          34
10.0                                    7                          36
10.3                                    6                          32
7.4                                     2                          25
13.4                                    6                          37
11.5                                    7                          34

Now, use this dataset to complete the following tasks:

a. Determine the relationship between the number of visitors and the number of parking tickets issued.
b. Find out the regression coefficient between the temperature and the number of visitors.
c. Look for any possible relationship between the temperature of the day and the number of parking tickets issued.

Further Reading and Resources

There are plenty of good (and some mediocre) books on statistics. If you want to develop your techniques in data science, I suggest you pick up a good statistics book at the level you need. A couple of such books are listed below.

• Salkind, N. (2016). Statistics for People Who (Think They) Hate Statistics. Sage.
• Krathwohl, D. R. (2009). Methods of Educational and Social Science Research: The Logic of Methods. Waveland Press.
• Field, A., Miles, J., & Field, Z. (2012). Discovering Statistics Using R. Sage.
• A video by IBM describing the progression from descriptive analytics, through predictive analytics, to prescriptive analytics: https://www.youtube.com/watch?v=VtETirgVn9c

Notes

1. Analysis vs. analytics: What's the difference? Blog by Connie Hill: http://www.1to1media.com/data-analytics/analysis-vs-analytics-whats-difference
2. KDnuggets™: Interview: David Kasik, Boeing, on Data analysis vs. data analytics: http://www.kdnuggets.com/2015/02/interview-david-kasik-boeing-data-analytics.html
3. Population map showing US census data: https://www.census.gov/2010census/popmap/
4. Of course, we have not covered these yet. But have patience; we are getting there.
5. Income distribution from US Census: https://www.census.gov/library/visualizations/2015/demo/distribution-of-household-income-2014.html
6. Pearson correlation: http://www.statisticssolutions.com/correlation-pearson-kendall-spearman/
7. Process of predictive analytics: http://www.amadeus.com/blog/07/04/5-examples-predictive-analytics-travel-industry/
8. Use for predictive analytics: https://www.r-bloggers.com/predicting-marketing-campaign-with-r/
9. Understanding predictive analytics: http://www.fico.com/en/predictive-analytics
10. A company called Ayata holds the trademark for the term "Prescriptive Analytics". (Ayata is the Sanskrit word for future.)
11. Process of prescriptive analytics: http://searchcio.techtarget.com/definition/Prescriptive-analytics
12. Game theory: http://whatis.techtarget.com/definition/game-theory
13. Use of prescriptive analytics: http://www.ingrammicroadvisor.com/data-center/four-types-of-big-data-analytics-and-examples-of-their-use
14. Gartner predicts predictive analytics as next big business trend: http://www.enterpriseappstoday.com/business-intelligence/gartner-taps-predictive-analytics-as-next-big-business-intelligence-trend.html
15.
Six types of analyses: https://datascientistinsights.com/2013/01/29/six-types-of-analyses-every-data-scientist-should-know/
16. Census data from US government: https://www.census.gov/data.html
17. Climate change causes: https://climate.nasa.gov/causes/
18. Global temperature in the last 100 years: https://www2.ucar.edu/climate/faq/how-much-has-global-temperature-risen-last-100-years
19. How businesses use regression analysis statistics: http://www.dummies.com/education/math/business-statistics/how-businesses-use-regression-analysis-statistics/

PART II TOOLS FOR DATA SCIENCE

This part includes chapters to introduce various tools and platforms such as UNIX (Chapter 4), Python (Chapter 5), R (Chapter 6), and MySQL (Chapter 7). It is important to keep in mind that, since this is not a programming or database book, the objective here is not to go systematically into various parts of these tools. Rather, we focus on learning the basics and the relevant aspects of these tools to be able to solve various data problems. These chapters therefore are organized around addressing various data-driven problems. In the chapters covering Python and R, we also introduce basic machine learning.

Before beginning with this part, make sure you are comfortable with the basic terminology concerning data, information technology, and statistics. It is also important that you review our discussion on computational thinking from Chapter 1, especially if you have never done any programming before.

In some respects, Chapter 5 (Python) and Chapter 6 (R) offer very similar content; they each start out by introducing the fundamentals of programming in their respective environment, show how basic statistical and data operations can be done, and then extend it by working on some machine learning problems. In other words, you could jump to Chapter 6 without going through Chapter 5 if you are not interested in Python.
However, you will find that Chapter 5 provides more detailed discussions of some of the concepts in statistics and machine learning, and, since Chapter 6 has less of it, we do not have to repeat so much of the conceptual material. I should also note that, while there is an introduction to applied machine learning for solving data problems in both the Python and R chapters, this is kept at the surface level, without enough depth for someone who really wants to use machine learning in data science. For that, you will need to move to the next part of this book. Keep in mind that when we do go deeper into machine learning in that part, we will be using only R, so make sure to go through Chapter 6 first, if you have not done R in the past.

4 UNIX

"Torture the data, and it will confess to anything."
— Ronald Coase, Nobel Prize Laureate for Economics

What do you need?
• A basic understanding of operating systems.
• Being able to install and configure software.

What will you learn?
• Basics of the UNIX environment.
• Running commands, utilities, and operations in UNIX.
• Using UNIX to solve small data problems without programming.

4.1 Introduction

While there are many powerful programming languages that one could use for solving data science problems, people forget that one of the most powerful and simplest tools to use is right under their noses. And that is UNIX. The name may generate images of old-time hackers hacking away on monochrome terminals. Or, it may hearken back to the idea of UNIX as a mainframe system, taking up lots of space in some warehouse. But, while UNIX is indeed one of the oldest computing platforms, it is quite sophisticated and supremely capable of handling almost any kind of computational and data problem. In fact, in many respects, UNIX is leaps and bounds ahead of other operating systems; it can do things of which others can only dream! Alas, when people think of tools for data science or data analytics, UNIX does not come to mind.
Most books on these topics do not cover UNIX. But I think this is a missed opportunity, as UNIX allows one to do many data science tasks, including data cleaning, filtering, organizing (sorting), and even visualization, often using no more than its built-in commands and utilities. That makes it appealing to people who have not mastered a programming language or a statistics tool. So, we are not going to pass up this wonderful opportunity.

In this chapter, we will see some basics of working in the UNIX environment. This involves running commands, piping and redirecting outputs, and editing files. We will also see several shortcuts that make it easier and faster to work on UNIX. Ultimately, of course, our goal is not mastering UNIX, but solving data-driven problems, and so we will see how UNIX is useful in solving many problems without writing any code.

4.2 Getting Access to UNIX

UNIX is everywhere. Well, it is if you look for it. If you are on a Linux machine, you are on a UNIX platform. Open your console or a terminal and you are good to go (Figure 4.1). If you have a Mac, you are working on a UNIX platform. Go to Applications > Utilities in your Finder window, and open the Terminal app. A window should open up and you should be at a command prompt (Figure 4.2).

If you are on a PC, it is a little tricky, but there is hope. There are a few options that allow you to create a UNIX-like environment on a Windows machine. One such option is Cygwin, available for free.2 (See Figure 4.3; and see Appendix C for instructions to install and use it.) Finally, if you do not have a UNIX-like environment on your computer or do not want to install something like Cygwin, you can get a couple of basic utilities and connect to a UNIX server. This will be covered in the next section. For the rest of this chapter, I am assuming that you are connecting to a UNIX server (even if you are already on a Linux or a Mac platform) and working on the server.
This server could be provided by your organization, school, or a Web hosting company. You could also look into free online UNIX server services.4 Another sure way to have access to a UNIX server is through a cloud service such as Amazon Web Services (AWS) or Google Cloud. See the FYI box below and Appendix F for more details on them. No matter what UNIX environment you end up using (whether you have a Linux or Mac machine, install Cygwin on a Windows PC, or connect to a UNIX server remotely), all that we are trying to do here (running commands, processing files, etc.) should work just the same.

Figure 4.1 Console window on a Linux KDE desktop.1
Figure 4.2 Terminal app on a Mac.
Figure 4.3 Cygwin running on a Windows desktop.3

FYI: There Are More Ways to Access UNIX

If you have never worked with anything like UNIX, it is important to take a pause at this point and really think about what UNIX is and, specifically, what we really need from it. While UNIX is a wonderful, powerful, networked operating system, we are not interested in that part of it. We simply want to get access to one of its shells (think about it as a cover or interface). This shell could give us access to the myriads of apps and utilities that UNIX has. Of course, this shell is covering the “real” UNIX, so while we are interested in the shell only, we have to find the “inner” part too. There are several ways to do this, but since we are not interested in actual UNIX so much, let us just do the one that is the easiest.

Start with your local machine. Does your computer have some form of UNIX? If you are on a Linux or a Mac, the answer is “yes.” If you are on a Windows machine, then ask: Will it be easier for you to install a few pieces of software and configure them or find an external UNIX machine to connect to? If it is the latter, you can typically find such a machine (often called a “server”) at your educational institute or your company.
But what if you are not in any school or do not have such a server at work? You can rent a server! That is right. Well, technically, it is called cloud services, but the idea is the same. You can go to Google, Amazon, or Microsoft and ask to create a virtual machine for you. Once created, you can log on to it just like you would to a physical server. Check out Appendix F, which takes you through the steps to set this up. If you are using UNIX just to get through this chapter, then look for the easiest solution. If you want to dive into it further, look for a more stable solution, including one of the cloud-based services. Do not worry; most services have a free tier.

4.3 Connecting to a UNIX Server

If you have access to a UNIX, Linux, or Mac computer, you are in luck. Because then all you need are a couple of freely available tools on your machine. Here, that is what we are going to do. The two things you need are a way to connect to the server, and a way to transfer files between your machine and that server.

4.3.1 SSH

Since we are going with the assumption that you are first connecting to a UNIX server before doing any of the UNIX operations, we need to learn how to connect to such a server. The plain vanilla method is the Telnet service, but since Telnet is often insecure, many UNIX servers do not support that kind of connection. Instead, we will use SSH, which stands for “secure shell.” This essentially refers to two parts: the server part and the client part. The former is something we are not going to worry about because, if we have access to a UNIX server, that server will have the necessary server part of SSH. What we need to figure out is the client part – a tool or a utility that we will run on our computers. To connect to a UNIX server using SSH, you need to be running some kind of shell (a program that provides you a command-line interface to your operating system) with SSH client service on your own machine.
Again, if you have a Linux or a Mac, all you have to do is open your terminal or console. On the other hand, if you are using a PC, you need software that has SSH client service. A couple of (free, of course) software options are WinSCP5 and PuTTY.6 (You can find instructions for using them at WinSCP and Using PuTTY in Windows,7 respectively.) Whichever option you choose, you will need three pieces of information: hostname, username, and password. Figure 4.4 shows what it looks like with PuTTY.

Hostname is the full name or IP (Internet Protocol) address of the server. The name could be something like example.organization.com and the IP address could be something like 192.168.2.1. The username and password are those related to your account on that server. You will have to contact the administrator of the server you are hoping to connect to in order to obtain these pieces of information.

Figure 4.4 A screenshot of PuTTY.

If you are already on a UNIX system like a Linux or a Mac, run (type and hit “enter”) the following command in your terminal.8

ssh username@hostname

If you are on a PC and are using one of the software options mentioned earlier (PuTTY, WinSCP), open that tool and enter information about host or server name (or IP address), username, and password in the appropriate boxes and hit “Connect” (or equivalent). Once successfully connected, you should get a command prompt. You are now (virtually) on the server. Refer to the screenshot in Figure 4.5 for an example of what you may see. Note that when you get a prompt to enter your password, you are not going to see anything you type – not even “*”. So just type your password and hit “enter.”

4.3.2 FTP/SCP/SFTP

Another important reason for connecting to the server is to transfer files between the client (your machine) and the server. Again, we have two options – non-secure FTP (File Transfer Protocol), or secure SCP (Secure Copy) or SFTP (Secure FTP).
If you are on a Linux or a Mac, you can use any of these utilities through your command line or console/shell/terminal. But, unless you are comfortable with UNIX paths and systems, you could become lost and confused.

Figure 4.5 Example of getting connected to a UNIX server using SSH.

So, we will use more intuitive file transfer packages. FileZilla happens to be a good one (free and easy to use) that is available for all platforms, but I am sure you can search online and find one that you prefer. In the end, they all offer similar functionalities. Whatever tool you have, you will once again need those three pieces of information: hostname, username, and password. Refer to the screenshot in Figure 4.6 from the FileZilla project site to give you an idea of what you might see. Here, the connection information that you need to enter is at the top (“Host,” “Username,” “Password,” and “Port”). Leave “Port” empty unless you have specific instructions about it from your system administrator. Figure 4.7 offers another example – this time from a different FTP tool, but as you can see, you need to enter the same kind of information: the server name (hostname), your username, and password.

Go ahead and enter those details and connect to the server. Once connected, you will be in the home directory on the server. Most file transfer software applications provide a two-pane view, where one pane shows your local machine and the other shows the server. Transferring files then becomes an easy drag-and-drop operation.

Figure 4.6 File transfer using FileZilla on Windows.

4.4 Basic Commands

In this section, we will see a few common commands. Try out as many of them as possible and be aware of the rest. They could save you a lot of time and trouble. I will assume that you are connected to a UNIX server using SSH. Alternatively, you can have a Cygwin environment installed (on a Windows PC), or you could work on a Linux or a Mac machine.
If you are using either a Linux or a Mac (not connected to a server), go ahead and open a terminal or a console.

Figure 4.7 Connecting to an FTP server using Transmit app on Mac.

4.4.1 File and Directory Manipulation Commands

Let us look at some of the basic file- and directory-related commands you can use with UNIX. Each of these is only briefly explained here and perhaps some of them may not make much sense until you really need them. But go ahead and try as much as you can for now and make a note of the others. In the examples below, whenever you see “filename”, you should enter the actual filename such as “test.txt”.

1. pwd: Present working directory. By default, when you log in to a UNIX server or open a terminal on your machine, you will be in your home directory. From here, you can move around to other directories using the “cd” command (listed below). If you ever get lost, or are not sure where you are, just enter “pwd”. The system will tell you the full path of where you are.
2. rm: Remove or delete a file (e.g., “rm filename”). Be careful. Deleting a file may get rid of that file permanently. So, if you are used to having a “Recycle Bin” or “Trash” icon on your machine from where you can recover deleted files, you might be in for an unpleasant surprise!
3. rmdir: Remove or delete a directory (e.g., “rmdir myfolder”). You need to make sure that the directory/folder you are trying to delete is empty. Otherwise the system will not let you delete it. Alternatively, you can say “rm -r myfolder”, where “-r” removes the directory along with its contents recursively (add “-f”, as in “rm -rf myfolder”, to force the deletion without prompts).
4. cd: Change directory (e.g., “cd data” to move into the “data” directory). Simply entering “cd” will bring you to your home directory. Entering a space and two dots or full points after “cd” (i.e., “cd ..”) will take you up a level.
5. ls: List the files in the current directory. If you want more details of the files, use the “-l” option (e.g., “ls -l”).
6. du: Disk usage.
To find out how much space a directory is taking up, you can issue a “du” command, which will display space information in bytes. To see things in MB and GB, use the “-h” option (e.g., “du -h”).
7. wc: Reports file size in lines, words, and characters (e.g., “wc myfile.txt”).
8. cat: Types the file content on the terminal (e.g., “cat myfile.txt”). Be careful about which file type you use it with. If it is a binary file (something that contains other than simple text), you may not only get weird characters filling up your screen, but you may even get weird sounds and other things that start happening, including freezing up the machine.
9. more: To see more of a file. You can say something like “more filename”, which is like “cat filename”, but it pauses after displaying a screenful of the file. You can hit Enter or the space-bar to continue displaying the file. Once again, use this only with text files.
10. head: Print the first few lines of a file (e.g., “head filename”). If you want to see the top three lines, you can use “head -3 filename”. Should I repeat that this needs to be tried only with text files?!

Figure 4.8 shows some of these commands running in a terminal window. Note that “keystone:data chirags$” is my command prompt. For you, it is going to be something different.

Figure 4.8 Sample commands run on a terminal.

4.4.2 Process-Related Commands

While most operating systems hide what is going on behind the scene of a nice-looking interface, UNIX gives unprecedented access to not only viewing those background processes, but also manipulating them. Here we will list some of the basic commands you may find useful for understanding and interacting with various processes.

1. Ctrl+c: Stop an ongoing process. If you are ever stuck in running a process or a command that does not give your command prompt back, this is your best bet. You may have to press Ctrl+c multiple times.
2. Ctrl+d: Logout.
Enter this on a command prompt and you will be kicked out of your session. This may even close your console window.
3. ps: Lists the processes that run through the current terminal.
4. ps aux: Lists the processes for everyone on the machine. This could be spooky since in a multiuser environment (multiple users logging into the same server) one could essentially see what others are doing! Of course, that also means someone else could spy on you as well.
5. ps aux | grep daffy: List of processes for user “daffy.” Since there are likely to be lots of processes going on in a server environment, and most of them are of no relevance to you, you can use this combination to filter out only those processes that are running under your username. We will soon revisit the “|” (pipe) character you see here.
6. top: Displays a list of top processes in real time (Figure 4.9). This could be useful to see which processes are consuming considerable resources at the time. Note the PID column at the left. This is where each process is reported with a unique process ID. You will need this in case you want to terminate that process. Press “q” to get out of this display.
7. kill: To kill or terminate a process. Usage: “kill -9 1234”. Here, “-9” indicates forced kill and “1234” is the process ID, which can be obtained from the second column of “ps” or the first column of “top” command outputs.

Figure 4.9 Output of “top” command.

4.4.3 Other Useful Commands

1. man: Help (e.g., “man pwd”). Ever wanted to know more about using a command? Just use “man” (refers to manual pages). You may be surprised (and overwhelmed) to learn about all the possibilities that go with a command.
2. who: Find out who is logged in to the server. Yes, spooky!

Try It Yourself 4.1: Basic UNIX

Once connected to a UNIX server or a shell, answer the following questions using appropriate UNIX commands.
1. List the contents of the current directory and ensure there is no directory named “test”.
2.
Create a new directory named “test”.
3. Move inside the new directory. Verify where you are by finding out the exact path of your current location.
4. Find out details of a UNIX command called “touch”. Learn how to use it to create a new file.
5. Create a new file using the “touch” command. Verify it by listing the content in the current directory.
6. Delete the newly created file. Verify it is removed by listing the content in the current directory.
7. Move out of the “test” directory.
8. Remove the “test” directory.
9. Logout using “Ctrl+d”.

4.4.4 Shortcuts

Those who are intimidated by it probably do not know the fantastic shortcuts that UNIX offers. Here are some of them to make your life easier.

1. Auto-complete: Any time you are typing a command, a filename, or a path on the terminal, type part of it and hit the “tab” key. The system will either complete the rest of it or show you options.
2. Recall: UNIX saves a history of the commands you used. Simply pressing the up and down arrow keys on the terminal will bring them up in the order they were executed.
3. Search and recall: Do not want to go through pressing the up arrow so many times to find that command? Hit Ctrl+r and start typing part of that command. The system will search through your command history. When you see what you were looking for, simply hit enter (and you may have to hit enter again).
4. Auto-path: Tired of typing the full path to some program or file you use frequently? Add it to your path by following these steps.
1. Go to your home directory on the server.
2. Open .bash_profile in an editor (notice the dot or full point before “bash_profile”).
3. Let us assume that the program you want to have direct access to is at /home/user/daffy/MyProgram. Replace the line PATH=$PATH:$HOME/bin with the following line: PATH=$PATH:$HOME/bin:/home/user/daffy/MyProgram
4. Save the file and exit the editor.
5. On the command line, run “. .bash_profile” (dot space dot bash_profile).
This will set the path environment for your current session. For all future sessions, it will be set the moment you log in. Now, whenever you need to run /home/user/daffy/MyProgram, you can simply type “MyProgram”.

And that is about it in terms of basic commands for UNIX. I know this could be a lot if you have never used UNIX/Linux before, but keep in mind that there is no need to memorize any of this. Instead, try practicing some of them and come back to others (or the same ones) later. In the end, nothing works better than practice. So, do not feel bad if these things do not sound intuitive – trust me, not all of them are! – or accessible enough at first. It may also help to practice these commands with a goal in mind. Look for a section later in this chapter where we see how these commands, processes, and their combinations can be used for solving data problems. But for now, let us move on to learn how to edit text files in a UNIX environment.

4.5 Editing on UNIX

4.5.1 The vi Editor

One of the most basic and powerful editors on UNIX is called vi, short for “visual.” I would not recommend it until you are comfortable with UNIX. But sometimes you may not have a choice – vi is omnipresent on UNIX. So even if some other editor may not be available, chances are, on a UNIX system, you will have vi. To edit or create a file using vi, type:

vi filename

at the command prompt. This will open up that file in the vi editor. If the file already exists, vi will load its content and now you can edit the file. If the file does not exist, you will have a blank editor.

Here is the tricky part. You cannot simply start typing to edit the file. You have to first enter the “insert” (editing) mode. To do so, simply press “i”. You will notice “-- INSERT --” appear at the bottom of your screen. Now you can start typing as you would normally in any text editor. To save the file, you need to enter the command mode.
Hit “esc” (the escape key at the top-left of your keyboard), then “:” (colon), and then “w”. You should see a message at the bottom of your screen that the file was saved. To exit, once again enter the command mode by pressing “esc”, then “:”, and finally “q” for quit. Figure 4.10 shows a screenshot of what editing with vi looks like.

I know all of this may sound daunting if you have never used UNIX. But if you keep up with it, there are some tremendous benefits that only UNIX can provide. For instance, vi can run quite an effective search in your file using regular expressions (pattern matching in strings).

Figure 4.10 A vi editor screenshot.

4.5.2 The Emacs Editor

I would recommend using Emacs as an easy-to-use alternative to vi. On the terminal, enter “emacs file.txt” to edit or create the file.txt file. Start typing as you would normally. To save, press the combination of Ctrl+x and Ctrl+s (hold down the Ctrl key and press “x” and then “s”). To quit, enter the combination Ctrl+x and Ctrl+c. See Figure 4.11 for an example of what Emacs looks like.

Alternatively, you can create/edit a file on your computer and “FTP it” to the server. If you decide to do this, make sure you are creating a simple text file and not a Word doc or some other non-text format. Any time you want to read a file on the server, you can type “cat filename”.

Try It Yourself 4.2: Editing

Start a new file in the Emacs editor. Type in your name, address, phone number, and email. Save the file and exit out of Emacs. Use an appropriate UNIX command to print the content of this file on your terminal, as well as count the number of characters.

Now open that file in the vi editor. Delete your phone number. Save the file and exit vi. Use an appropriate UNIX command to print the content of this file on your terminal, as well as count the number of characters.
What is the difference you see in this output compared to before? Was there a reduction in the number of characters in this file? By how much?

4.6 Redirections and Piping

Many programs and utilities (system-provided or user-created) can produce output, which is usually displayed on the console. However, if you like, you can redirect that output. For instance, the “ls” command lists all the files available in a given directory. If you want this listing stored in a file instead of displayed on the console, you can run “ls > output”. Here, “>” is a redirection operator and “output” is the name of the file where the output of the “ls” command will be stored.

Figure 4.11 An Emacs editor screenshot.

Here we assumed that the “output” file does not already exist. If it does, its content will be overwritten by what “ls” generated. So be careful – check that you are not wiping out an existing file before redirecting the output to it. Sometimes you want to append new output to an existing file instead of overwriting it or creating a new file. For that, you can use the operator “>>”. Example: “ls >> output”. Now, the new output will be added at the end of the “output” file. If the file does not exist, it will be created just like before.

Redirection also works the other way. Let us take an example. We know that “wc -l xyz.txt” can count the number of lines in xyz.txt and display it on the console; specifically, it lists the number of lines followed by the filename (here, xyz.txt). What if you want only the number of lines? You can redirect the file to the “wc -l” command like this: “wc -l < xyz.txt”. Now you should see only a number. Let us extend this further. Imagine you want to store this number in another file (instead of displaying it on the console). You can accomplish this by combining two redirection operators, like this: “wc -l < xyz.txt > output”.
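To see this combination in action end to end, here is a tiny self-contained run you can paste into a terminal (the file xyz.txt and its three lines are made up purely for illustration):

```shell
# Create a small three-line file to play with (made-up content)
printf 'line one\nline two\nline three\n' > xyz.txt

# Read the file via input redirection, write the count via output redirection
wc -l < xyz.txt > output

# The file "output" now holds just the line count: 3
cat output
```

Removing the scratch files afterwards (“rm xyz.txt output”) keeps your directory tidy.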
Now, a number will be calculated and it will be stored in a file named “output.” Go ahead and do “cat output” to read that file.

Redirection is primarily done with files, but UNIX allows other ways to connect different commands or utilities. And that is done using pipes. Looking for a pipe symbol “|” on your keyboard? It is the character above the “\” character.

Let us say you want to read a file. You can run “cat xyz.txt”. But it has too many lines and you care about only the first five. You can pipe the output of the “cat xyz.txt” command to another command, “head -5”, which shows only the top five lines. And thus, the whole command becomes “cat xyz.txt | head -5”. Now imagine you want to see only the fifth line of that file. No problem. Pipe the above output to another command, “tail -1”, which shows only the last (1) line of whatever is passed to it. So, the whole command becomes “cat xyz.txt | head -5 | tail -1”. And what if you want to store that one line to a file instead of just seeing it on the console? You guessed it – “cat xyz.txt | head -5 | tail -1 > output”. In the next section, we will see more examples of how redirections and pipes can be used for solving simple problems.

Try It Yourself 4.3: Redirection and Piping

For this exercise, use the file you created earlier for the last hands-on homework. First, use the appropriate UNIX command to print the number of lines in the file to the console. Next, use the redirection operator(s) that you just learned to add this number at the end of the same file. Finally, print the last line of the file in the console. If you have done this correctly, the first and the last step will print the same output in the console.

4.7 Solving Small Problems with UNIX

UNIX provides an excellent environment for problem solving. We will not be able to go into every kind of problem and related details, but we will look at a few examples here.

1. Display Content of a File

cat 1.txt

2.
Combining Multiple Files in One File

cat 1.txt 2.txt 3.txt > all.txt

3. Sorting a File

Let us say we have a file numbers.txt with one number per line and we want to sort them. Just run:

sort numbers.txt

Want them sorted in descending order (reverse order)? Run:

sort -r numbers.txt

We can do the same with non-numbers. Let us create a file text.txt with “the quick brown fox jumps over the lazy dog,” as text written one word per line. And now run the sorting command:

sort text.txt

to get those words sorted alphabetically. How about sorting multicolumn data? Say your file, test.txt, has three columns and you want to sort the dataset according to the values in the second column. Just run the following command:

sort -k2 test.txt

4. Finding Unique Tokens

First, we need to make sure the words or tokens in the file are sorted, and then run the command for finding unique tokens. How do we do two things at the same time? Time to bring out the pipes:

sort text.txt | uniq

5. Counting Unique Tokens

And now, what if we want to find out how many unique tokens are in a file? Add another pipe:

sort text.txt | uniq | wc -l

6. Searching for Text

There are many ways one can search for text on UNIX. One option is “grep”. Let us search for the word “fox” in text.txt:

grep 'fox' text.txt

If the word exists in that file, it will be printed on the console, otherwise the output will be nothing. But it does not end here. Let us say we want to search for “fox” in all the text files in the current directory. That can be done using:

grep 'fox' *.txt

Here, “*.txt” indicates all the text files, with “*” being the wildcard. In the output, you can see all the .txt files that have the word “fox”.

7. Search and Replace

Just like searching for text, there are several ways one can substitute text on UNIX. It often depends on where you are doing this search and replacement. If it is inside a text editor like vi or Emacs, you can use those editor-specific commands.
But let us go with doing this on the console. We will use the “sed” command to replace “fox” with “sox” in our text.txt file and save it to text2.txt.

sed 's/fox/sox/' text.txt > text2.txt

Here, 's/fox/sox/' means search for “fox” and replace it with “sox”. Notice the use of redirection in this command.

8. Extracting Fields from Structured Data

Let us first create a text file with a couple of fields for this experiment. Here is a file called “names.txt”:

Bugs Bunny
Daffy Duck
Porky Pig

Now, let us say we want to get everyone’s first name. We use the “cut” command like this:

cut -d ' ' -f1 names.txt

Here, the “-d” option is for specifying a delimiter, which is a space (see the option right after “-d”), and “-f1” indicates the first field. Let us create another file with phone numbers, called phones.txt:

123 456 7890
456 789 1230
789 123 4560

How do we get the last four digits of all phone numbers from this file?

cut -d ' ' -f3 phones.txt

9. More Text Operations

If you want to perform more textual operations, you can look at the “fmt” command. It is a simple text formatter, often used for limiting the width of lines in a file. Paired with a -width flag (where width is a positive integer denoting the maximum number of characters to go on each output line), the “fmt” command can be used to display individual words in a sentence: since “fmt” does not split words (sequences of non-white-space characters), a width of 1 puts each word on its own line. For example:

fmt -1 phones.txt

Running the above command on the phones.txt data shown earlier will print the following:

123
456
7890
456
789
1230
789
123
4560

If your dataset is too large you can use “head” to print the first 10 lines. So, the above line of code can be rewritten as:

fmt -1 phones.txt | head

10. Merging Fields from Different Files

Now let us combine our names and phone numbers. For this, we will use the “paste” command. This command takes at least two arguments.9
In this case, we will pass two arguments – names of the two files that we want to merge horizontally:

paste names.txt phones.txt

And voilà! Here is what you get:

Bugs Bunny 123 456 7890
Daffy Duck 456 789 1230
Porky Pig 789 123 4560

And of course, you can store that output to a file using redirection, like this:

paste names.txt phones.txt > phonebook.txt

If you print out this file on the console using the “cat” command, you will be able to see its content, which will, of course, be the same as what you just saw above.

11. Arithmetic Operations

UNIX can help you do small arithmetic operations without really writing any program. You just need to follow a few simple rules. For instance, we could easily assign values to variables “a” and “b” as follows (note that in the shell there must be no spaces around the “=” sign when assigning a value):

a=10
b=5

Now, let us add them.

total=`expr $a + $b`
echo $total

Make sure you enter the above command exactly as it is shown here: those are back-ticks (on US keyboards, the character under the tilde character; yours may be elsewhere) and not single quotes, and there need to be spaces around the “+” sign. This should print the result (15). Similarly, `expr $a \* $b` is for multiplication, `expr $b / $a` is for division, and `expr $b % $a` is for the modulus operation.

Try It Yourself 4.4: Small Data Problems

1. Execute a simple arithmetic operation
Use UNIX commands to multiply 3 and 10, followed by divide by 2, and finally modulo 5.
2. Write the output to a file
Write the output of all three steps in the last problem to a new file, numbers.txt, in different lines.
3. Sorting a file
Next, sort all the numbers in numbers.txt in ascending order.
4. Add more numbers to the end of a file
Use UNIX commands to modify the file by adding the output of the last step to the end of the file.
5. Counting the number of unique numbers
Count the number of unique numbers in the file.
6.
Count the frequency of numbers
Count the number of times each unique number appears in the same file.
7. Search and replace
Using the UNIX commands, find the largest number that appears in the file and the line number(s) in which it appears in the file. Add the text “Maximum: ” at the beginning of the line(s). For example, if the largest number is 10 and it appears at the third line, at the end of this step the same line should appear as follows:

Maximum: 10

Hands-On Example 4.1: Solving Data Problems with UNIX

Let us take a data problem. We will work with housing.txt, which is a text file containing housing data. It is pretty big for a text file (53 MB) and available for download from OA 4.1. That size is easy to find. But can you find out how many records it has? This is a CSV file (field values separated using commas), so potentially you can load it into a spreadsheet program like Excel. OK, then go ahead and do it. It may take a while for that program to load this data. After that, see how quickly you can figure out the answer to that question.

Alternatively, we can use UNIX. If you are already on a UNIX-based machine or have a UNIX-based environment, open a console and navigate to where this file is using “cd” commands. Or you can upload this file to a UNIX server using FTP or SFTP and then log in to the machine using SSH. Either way, I am assuming that you have a console or a terminal open to the place where this file is stored. Now simply issue:

wc housing.txt

That should give you an output like this:

64536 3256971 53281106 housing.txt

The first number represents the number of lines, the second one refers to the number of words, and the third one reports the number of characters in this file. In other words, we can immediately see from this output that we have 64,535 records (one less because the first line has field names and not actual data). See how easy this was?!
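By the way, if you would rather not subtract the header line in your head, you can combine “tail” and “wc” to count only the data records. Here is a hedged sketch on a tiny made-up file standing in for housing.txt:

```shell
# A miniature stand-in for housing.txt: one header line plus two data records
printf 'AGE1,VALUE,ROOMS\n34,250000,6\n41,180000,5\n' > sample.csv

# "tail -n +2" prints everything from line 2 onward (skipping the header);
# piping that into "wc -l" then counts only the data records
tail -n +2 sample.csv | wc -l
```

Run on housing.txt itself, the same pipeline reports the record count (64,535) directly, with no arithmetic needed.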
Next, can you figure out what fields this data has? It is easy using the “head” command on UNIX:

head -1 housing.txt

The output will list a whole bunch of fields, separated by commas. While some of them will not make sense, some are easy to figure out. For instance, you can see a column listing the age (AGE1), value (VALUE), number of rooms (ROOMS), and number of bedrooms (BEDRMS). Try some of the UNIX commands you know to explore this data. For instance, you can use “head” to see the first few lines and “tail” to see the last few lines. Now, let us ask: What is the maximum number of rooms in any house in this data? To answer this, we need to first extract the column that has information about the number of rooms (ROOMS) and then sort it in descending order. OK, so where is the ROOMS column? Using the “head -1” command shown above, we can find the names of all the fields or columns. Here, we can see that ROOMS is the nineteenth field. To extract it, we will split the data into fields and ask for field #19. Here is how to do it:

cut -d ',' -f19 housing.txt

Here, the “cut” command allows us to split the data using a delimiter (that is the -d option), which is a comma here. And finally, we are looking for field #19 (-f19). If you happened to run this command (oh, did I not tell you not to run it just yet?!), you perhaps saw a long list of numbers flying across your screen. That is simply the nineteenth field (all 64,535 values) being printed on your console. But we need to print it in a certain order. That means we can pipe with a “sort” command, like this:

cut -d ',' -f19 housing.txt | sort -nr

Can you figure out what all those new things mean? Well, the first is the pipe “|” symbol. Here it allows us to channel the output of one command (here, “cut”) to another one (here, “sort”). Now we are looking at the “sort” command, where we indicate that we want to sort numerical values in descending or reverse order (thus, -nr).
Once again, if you had no patience and went ahead to try this, you saw a list of numbers flying by, but in a different order. To make this output easier to view, we can do one more piping – this time to “head”:

cut -d ',' -f19 housing.txt | sort -nr | head

Did you run it? Yes, this time I do want you to run this command. Now you can see only the first few values. And the largest value is 15, which means in our data the highest number of rooms any house has is 15. What if we also want to see how much these houses cost? Sure. For that, we need to extract field #15 as well. If we do that, here’s what the above command looks like:

cut -d ',' -f15,19 housing.txt | sort -nr | head

Here are the top few lines in the output:

2520000,15
2520000,14
2520000,13
2520000,13
2520000,13

Do you see what is happening here? The “sort” command is taking the whole thing like “2520000,15” for sorting. That is not what we really want. We want it to sort the data using only the number of rooms. To make that happen, we need to have “sort” do its own splitting of the data passed to it by the “cut” command and apply sorting to a specific field. Here is what that looks like:

cut -d ',' -f15,19 housing.txt | sort -t ',' -k2 -nr | head

Here we added the “-t” option to indicate we are going to split the data using a delimiter (“,” in this case), and once we do that, we will use the second field or key to apply sorting (thus, -k2). And now the output looks something like the following:

450000,15
2520000,15
990000,14
700000,14
600000,14

Try It Yourself 4.5: Solving Data Problems with UNIX

Continuing from the Hands-On Example 4.1, try changing “-k2” to “-k1”. How is the data being sorted now? Repeat these steps for number of bedrooms. In other words, extract and sort the data in descending order of number of bedrooms, displaying house value, total number of rooms, and, of course, number of bedrooms.
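If you want to experiment with this kind of pipeline without downloading the 53 MB file, here is a self-contained miniature: a made-up CSV whose two fields stand in for the VALUE and ROOMS columns (all numbers below are invented for illustration).

```shell
# A miniature, self-contained version of the cut | sort | head pipeline.
# mini.csv is a made-up stand-in for housing.txt: field 1 plays VALUE,
# field 2 plays ROOMS.
cat > mini.csv <<'EOF'
VALUE,ROOMS
450000,15
990000,14
700000,14
2520000,15
600000,12
EOF

# drop the header line, extract the two fields, then sort on the second
# comma-separated field, numerically and in reverse (descending) order
tail -n +2 mini.csv | cut -d ',' -f1,2 | sort -t ',' -k2 -nr | head -3
```

The tail -n +2 step skips the header row so that the word “ROOMS” does not get mixed into the numeric sort.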
Hopefully, now you have gotten the hang of how these things work. Go ahead and play around some more with this data, trying to answer some questions or satisfying your curiosity about this housing market data. You now know how to wield the awesome power of UNIX!

FYI: When Should You Consider UNIX?

If you know more than one programming language or have familiarity with multiple programming environments, you know that you tend to develop preferences for which to use when. It is also not uncommon for people to use their favorite programming tool for solving a problem that it is not well suited for. So, if you have not practiced on UNIX or Linux before, it would be hard to consider it as an option for a problem-solving need. Given that, it might help you if I describe when and how I personally consider UNIX. Most times I work on a Mac system, which makes it easy to bring up the Terminal app. Mac, as you may know, is built on UNIX, and the Terminal app can execute most commonplace UNIX commands. One need with textual data that I often have is counting tokens – characters, words, lines. As you can imagine, writing a program just to do that is too much. What I have seen most people doing is opening up that text document in word processing software, such as Microsoft Word, and using the built-in document statistics functionality to come up with the answer. This can easily take a minute or two. But with the “wc” command, I can get the answer in two seconds. But that is not all. There are documents or files that are so large that I would not dare to open them in an application just so that I can get a line count. It may take a long time just to load them, and the program may even crash. But once again, a simple use of “wc” would yield the desired results in seconds without any side effects. Another common use I have for UNIX commands is when I am dealing with CSV files and need column extraction, sorting, or filtering done.
Sure, I could use Excel, but that could take longer, especially if my data is large. And these are just a couple of simple cases of frequent uses of UNIX utilities; I have many other uses – some more occasional than others. I admit that it all comes down to what you know and what you are willing to invest. But I hope this chapter has at least taken care of the former, that is, knowing enough UNIX. For the latter (amount of time and effort), give it some time. The more you practice, the better it gets. And before you know it, you will be saving precious time and getting more productive with your data processing needs!

Summary

UNIX is one of the most powerful platforms around, and the more you know about it and how to use it, the more amazing things you can do with your data, without even writing any code. People do not often think about using UNIX for data science, but as you can see in this chapter, we could get so much done with so little work. And we have only scratched the surface. We learned how to have a UNIX-like environment on your machine or to connect to a UNIX server, as well as how to transfer files to/from that server. We tried a few basic and a few not-so-basic commands. But it is really the use of pipes and redirections that makes these things pop; this is where UNIX outshines anything else. Finally, we applied these basic skills to solve a couple of small data problems. It should be clear by now that those commands or programs on UNIX are highly efficient and effective. They can crunch through a large amount of data without ever choking! Earlier in this chapter I told you that UNIX can also help with data visualization. We avoided that topic here because creating visualizations (plots, etc.) would require a few installations and configurations on the UNIX server. Since that is a tall order and I cannot expect everyone to have access to such a server (and very friendly IT staff!), I decided to leave that part out of this chapter.
Besides, soon we will see much better and easier ways to do data visualizations. Going further, I recommend learning about shell scripting or programming. This is UNIX’s own programming language that allows you to leverage the full potential of UNIX. Most shell scripts are small and yet very powerful. You will be amazed by the kinds of things you can have your operating system do for you!

Key Terms

• File: A file is a collection of related data that appears to the user as a single, contiguous block of information, has a name, and is retained in storage.
• Directory: A directory in an operating system (e.g., UNIX) is a special type of file that contains a list of objects (i.e., other files, directories, and links) and their corresponding details (e.g., when the file was created, last modified, file type, etc.), but not the actual content of those objects.
• Protocol: A system of rules governing a process; for example, the FTP protocol defines the rules of the file transfer process.
• SSH (Secure Shell): This is an application or interface that allows one to either run UNIX commands on or connect with a UNIX server.
• FTP (File Transfer Protocol): This is an Internet protocol that allows one to connect two remote machines and transfer files between them.

Conceptual Questions

1. What is a shell in the context of UNIX?
2. Name at least two operating systems that are based on UNIX.
3. What is the difference between a pipe and a redirection?

Hands-On Problems

Problem 4.1

You are given a portion of the data from NYC about causes of death for some people in 2010 (available for download from OA 4.2). The data is in CSV format with the following fields: Year, Ethnicity, Sex, Cause of Death, Count, Percent. Answer the following questions using this data. Use UNIX commands or utilities. Show your work.
Note that the answers to each of these questions should be the direct result of running appropriate commands/utilities and not involve any further processing, including manual work. Answers without the method to achieve them will not receive any credit.

a. How many male record groups and how many female record groups does the data have?
b. How many white female groups are there? Copy their entire records to a new file where the records are organized by death count in descending order.
c. List causes of death by their frequencies in descending order. What are the three most frequent causes of death for males and females?

Problem 4.2

Over the years, UNICEF has supported countries in collecting data related to children and women through an international household survey program. The survey responses on sanitation and hygiene from 70 countries in 2015 are constructed into a dataset on handwashing. You can download the CSV file from OA 4.3. Use the dataset to answer the following questions using UNIX commands or utilities:

a. Which region has the lowest percentage of urban population?
b. List the region(s) where the urban population (reported in thousands) is more than a million and yet comprises less than half of the total population.

Problem 4.3

For the following exercise, use the data on availability of essential medicines in 38 countries from the World Health Organization (WHO). You can find the dataset in OA 4.4. Download the filtered data as a CSV table and use it to answer the following questions.

1. Between 2007 and 2013, which country had the lowest percentage median availability of selected generic medicines in the private sector?
2. List the top five countries that have the highest percentage of public and private median availability of selected medicines in 2007–2013.
3. List the top three countries where it is better to rely on the private availability of selected generic medicines than the public. Explain your answer with valid reasons.
Further Reading and Resources

As noted earlier in this chapter, UNIX is often ignored as a tool/system/platform for solving data problems. That being said, there are a few good options for educating yourself about the potential of UNIX for data science that do a reasonable job of helping a beginner wield at least part of the awesome power that UNIX has. Data Science at the Command Line10 provides a nice list of basic commands and command-line utilities that one could use while working on data science tasks. The author, Jeroen Janssens, also has a book of the same name, Data Science at the Command Line, published by O’Reilly, which is worth consideration if you want to go further in UNIX. Similarly, Dr. Bunsen has a nice blog on explorations in UNIX available from http://www.drbunsen.org/explorations-in-unix/. There are several cheat sheets that one could find on the Web containing UNIX commands and shortcuts. Some of them are here:

• http://cheatsheetworld.com/programming/unix-linux-cheat-sheet/
• http://www.cheat-sheets.org/saved-copy/ubunturef.pdf
• https://www.cheatography.com/davechild/cheat-sheets/regular-expressions/

Notes

1. Picture of console window on a Linux KDE console: https://www.flickr.com/photos/okubax/29814358851
2. Cygwin project: http://www.cygwin.com
3. Cygwin running on a Windows desktop: https://commons.wikimedia.org/wiki/File:Cygwin_X11_rootless_WinXP.png
4. Free UNIX: http://sdf.lonestar.org/index.cgi
5. WinSCP: http://winscp.net/eng/index.php
6. Download PuTTY: http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
7. Using PuTTY in Windows: https://mediatemple.net/community/products/dv/204404604/using-ssh-in-putty-
8. It is not uncommon for students to literally type the instructions rather than interpreting them and substituting correct values. I have done a lot of “debugging” for such cases. So, do not literally type “ssh username@hostname”.
You will substitute “username” with your actual username on that server, and “hostname” with the full address of the server. Save your instructor the debugging hassle! And be sure to make a note of these details where you can find them again later!
9. Arguments to a command or a program are the options or inputs it takes. No, that command is not trying to fight with you!
10. Data Science at the Command Line: http://datascienceatthecommandline.com/

5 Python

“Most good programmers do programming not because they expect to get paid or get adulation by the public, but because it is fun to program.” — Linus Torvalds

What do you need?
• Computational thinking (refer to Chapter 1).
• Ability to install and configure software.
• Knowledge of basic statistics, including correlation and regression.
• (Ideally) Prior exposure to any programming language.

What will you learn?
• Basic programming skills with Python.
• Using Python to do statistical analysis, including producing models and visualizations.
• Applying introductory machine learning techniques such as classification and clustering with Python to various data problems.

5.1 Introduction

Python is a simple-to-use yet powerful scripting language that allows one to solve data problems of varying scale and complexity. It is also the most used tool in data science and the one most frequently listed in data science job postings as a requirement. Python is a very friendly and easy-to-learn language, making it ideal for the beginner. At the same time, it is very powerful and extensible, making it suitable for advanced data science needs. This chapter will start with an introduction to Python and then dive into using the language for addressing various data problems using statistical processing and machine learning.

5.2 Getting Access to Python

One of the appeals of Python is that it is available for almost every platform you can imagine, and for free.
In fact, in many cases – such as working on a UNIX or Linux machine – it is likely to be already installed for you. If not, it is very easy to obtain and install.

5.2.1 Download and Install Python

For the purpose of this book, I will assume that you have access to Python. Not sure where it is? Try logging on to the server (using SSH) and run the command “python --version” at the terminal. This should print the version of Python installed on the server. It is also possible that you have Python installed on your machine. If you are on a Mac or Linux, open a terminal or a console and run the same command as above to see if you have Python installed, and, if you do, what version. Finally, if you would like, you can install an appropriate version of Python for your system by downloading it directly from Python1 and following the installation and configuration instructions. See this chapter’s further reading and resources for a link, and Appendices C and D for more help and details.

5.2.2 Running Python through Console

Assuming you have access to Python – either on your own machine or on the server – let us now try something. On the console, first enter “python” to enter the Python environment. You should see a message and a prompt like this:

Python 3.5.2 |Anaconda 4.2.0 (x86_64)| (default, Jul 2 2016, 17:52:12)
[GCC 4.2.1 Compatible Apple LLVM 4.2 (clang-425.0.28)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>

Now, at this prompt (the three “greater than” signs), write print("Hello, World!") and hit enter. If things go right, you should see “Hello, World!” printed on the screen like the following:

>>> print("Hello, World!")
Hello, World!

Let us now try a simple expression: 2+2. You see 4? Great! Finally, let us exit this prompt by entering exit(). If it is more convenient, you could also do Ctrl+d to exit the Python prompt.
5.2.3 Using Python through an Integrated Development Environment (IDE)

While running Python commands and small scripts on the console is fine, there are times when you need something more sophisticated. That is when an Integrated Development Environment (IDE) comes in. An IDE lets you not only write and run programs; it can also provide help and documentation, as well as tools to debug, test, and deploy your programs – all in one place (thus, “integrated”). There are several decent options for a Python IDE, including using the Python plug-in for a general-purpose IDE such as Eclipse. If you are familiar with and invested in Eclipse, you might get the Python plug-in for Eclipse and continue using Eclipse for your Python programming. Look at the footnote for PyDev.2 If you want to try something new, then look up Anaconda, Spyder, and IPython (more in Appendix D). Note that most beginners waste a lot of time trying to install and configure packages needed for running various Python programs. So, to make your life easier, I recommend using Anaconda as the platform and Spyder on top of it. There are three parts to getting this going. The good news is – you will have to do this only once. First, make sure you have Python installed on your machine. Download and install an appropriate version for your operating system from the Python3 link in the footnote. Next, download and install Anaconda Navigator4 from the link in the footnote. Once ready, go ahead and launch it. You will see something like Figure 5.1. Here, find a panel for “spyder.” In the screenshot, you can see a “Launch” button because I already have Spyder installed. For you, it may show “Install.” Go ahead and install Spyder through Anaconda Navigator. Once installed, that “Install” button in the Spyder panel should become “Launch.” Go ahead and launch Spyder. Figure 5.2 shows how it may look.
Figure 5.1 A screenshot of Anaconda Navigator.

Well, it is probably not going to have all the stuff that I have showing here, but you should see three distinct panes: one occupying the left half of the window and two on the right side. The left panel is where you will type your code. The top-right panel has tabs (along its bottom) for the Variable explorer and File explorer, as well as Help. The bottom-right panel is where you will see the output of your code. That is all for now in terms of setting things up. If you have made it this far, you are ready to do real Python programming. The nice thing is, whenever we need extra packages or libraries for our work, we can go to Anaconda Navigator and install them through its nice IDE, rather than fidgeting with command-line utilities (many of my students have reported wasting hours doing that). Rather than doing any programming theory, we will learn basic Python using hands-on examples.

5.3 Basic Examples

In this section, we will practice with a few basic elements of Python. If you have done any programming before, especially programming that involves scripting, this should be easy to understand. The following screenshots are generated from an IPython (Jupyter) notebook. Refer to Appendix D if you want to learn more about this tool. Here, “In” lines show you what you enter, and “Out” lines show what you get in return. But it does not matter where you are typing your Python code – directly at the Python console, in the Spyder console, or in some other Python tool – you should see the same outputs.

Figure 5.2 A screenshot of Spyder IDE.

In the above segment, we began with a couple of commands that we tried when we first started up Python earlier in this chapter. Then, we did variable assignments. Entering “x = 2” defines variable “x” and assigns value “2” to it.
In many traditional programming languages, doing this much could take two to three steps, as you have to declare what kind of variable you want to define (in this case, an integer – one that could hold whole numbers) before you could use it to assign values to it. Python makes it much simpler. Most times you would not have to worry about declaring data types for a variable. After assigning values to variables “x” and “y”, we performed a mathematical operation when we entered “z = x + y”. But we do not see the outcome of that operation until we enter “z”. This also should convey one more thing to you – generally speaking, when you want to know the value stored in a variable, you can simply enter that variable’s name at the Python prompt. Continuing on, let us see how we can use different arithmetic operators, followed by the use of logical operators, for comparing numerical quantities. Here, first we entered a series of mathematical operations. As you can see, Python does not care if you put them all on a single line, separated by commas. It understands that each of them is a separate operation and provides you answers for each of them. Most programming languages use logical operators such as “>,” “<,” “>=,” and “<=.” Each of these should make sense, as they are exact representations of what we would use in regular math or logic studies. Where you may find a little surprise is how we represent comparison of two quantities (using “==”) and inequality (using “!=”). Use of logical operations results in Boolean values – “true” or “false,” or 1 or 0. You can see that in the above output: “2 > 3” is false and “3 >= 3” is true. Go ahead and try other operations like these. Python, like most other programming languages, offers a variety of data types. What is a data type? It is a format for storing data, including numbers and text. But to make things easier for you, often these data types are hidden.
In other words, most times, we do not have to explicitly state what kind of data type a variable is going to store. As you can see above, we could use the “type” function around a variable name (e.g., “type(x)”) to find out its data type. Given that Python does not require you to explicitly define a variable’s data type, it will make an appropriate decision based on what is being stored in a variable. So, when we tried storing the result of a division operation – x/y – into variable “z,” Python automatically decided to set z’s data type to be “float,” which is for storing real numbers such as 1.1, −3.5, and 22/7.

Try It Yourself 5.1: Basic Operations

Work on the following exercises using Python with any method you like (directly on the console, using Spyder, or using an IPython notebook).

1. Perform the arithmetic operation 182 modulo 13 and store the result in a variable named “output.”
2. Print the value and data type of “output.”
3. Check if the value stored in “output” is equal to zero.
4. Repeat steps 1–3 with the arithmetic operation of 182 divided by 13.
5. Report if the data type of “output” is the same in both cases.

5.4 Control Structures

To make decisions based on meeting a condition (or two), we can use “if” statements. Let us say we want to find out if 2020 is a leap year. Here is the code:

year = 2020
if (year%4 == 0):
    print ("Leap year")
else:
    print ("Not a leap year")

Here, the modulus operator (%) divides 2020 by 4 and gives us the remainder. If that remainder is 0, the script prints “Leap year,” otherwise we get “Not a leap year.” (Strictly speaking, the calendar rule is a bit more involved – century years must also be divisible by 400 – but this simple check works for 2020.) Now what if we have multiple conditions to check? Easy. Use a sequence of “if” and “elif” (short for “else if”). Here is the code that checks one variable (collegeYear), and, based on its value, declares the corresponding label for that year:
Here is the code that checks one variable (collegeYear), and, based on its value, declares the corresponding label for that year: collegeYear = 3 if (collegeYear == 1): print (“Freshman”) elif (collegeYear == 2): print (“Sophomore”) elif (collegeYear == 3): print (“Junior”) elif (collegeYear == 4): print (“Senior”) 131 5.4 Control Structures else: print (“Super-senior or not in college!”) Another form of control structure is a loop. There are two primary kinds of loops: “while” and “for”. The “while” loop allows us to do something until a condition is met. Take a simple case of printing the first five numbers: a, b = 1, 5 while (a<=b): print (a) a += 1 And here is how we could do the same with a “for” loop: for x in range(1, 6): print (x) Let us take another set of examples and see how these control structures work. As always, let us start with if–else. You probably have guessed the overall structure of the if– else block by now from the previous example. In case you have not, here it is: if condition1: statement(s) elif condition2: statement(s) else: statement(s) In the previous example, you saw a condition that involves numeric variables. Let us try one that involves character variables. Imagine in a multiple-choice questionnaire you are given four choices: A, B, C, and D. Among them, A and D are the correct choices and the rest are wrong. So, if you want to check if the answer chosen is the correct answer, the code can be as follows: if ans == ‘A’ or ans == ‘D’: print (“Correct answer”) else print (“Wrong answer”) Next, let us see if the same problem can be solved with the while loop: ans = input(‘Guess the right answer: ’) while (ans != ‘A’) and (ans != ‘D’): print (“Wrong answer”) ans = input(‘Guess the right answer: ’) The above code, will prompt the user to provide a new choice until the correct answer is provided. 
As evidenced from the two examples, the structure of the while loop can be viewed as:

while condition:
    statement(s)

The statement(s) within the while loop are going to be executed repeatedly as long as the condition remains true. A for loop, by contrast, iterates over a collection directly:

correctAns = ["A", "D"]
for ans in correctAns:
    print(ans)

The above lines of code simply print the correct choices for the question.

Try It Yourself 5.2: Control Structures

Pick any number between 99 and 199 and check if the number is divisible by 7 using Python code. If the number is divisible, print the following: “the number is divisible by 7”; otherwise, print the number closest to the number you picked that is divisible by 7. Using a while loop, print all the numbers that are divisible by 7 within the same range.

5.5 Statistics Essentials

In this section, we will see how some statistical elements can be measured and manifested in Python. You are encouraged to learn basic statistics or brush up on those concepts using external resources (see Chapter 3 and Appendix C for some pointers). Let us start with a distribution of numbers. We can represent this distribution using an array, which is a collection of elements (in this case, numbers). For example, say we are creating our family tree, and having put some data on the branches and leaves of this tree, we want to do some statistical analysis. Let us look at everyone’s age. Before doing any processing, we need to represent it as follows:

data1=[85,62,78,64,25,12,74,96,63,45,78,20,5,30,45,78,45,96,65,45,74,12,78,23,8]

If you like, you can call this a dataset. We will use a very popular Python package or library called “numpy” to run our analyses. So, let us import this library and give it a shorthand name:

import numpy as np

What did we just do? We asked Python to import a library called “numpy” and we said, internally (for the current session or program), that we will refer to that library as “np”.
This particular library or package is extremely useful for us, as you will see. (Do not be surprised if many of your Python sessions or programs have this line somewhere in the beginning.) Now, let us start asking (and answering) questions.

1. What is the largest (max) and the smallest (min) of these values?

max=np.max(data1)
print("Max:{0:d}".format(max))
min=np.min(data1)
print("Min:{0:d}".format(min))

2. What is the average age? This can be measured using mean.

mean=np.mean(data1)
print("Mean:{0:8.4f}".format(mean))

3. How are age values spread across this distribution? We can use variance and standard deviation for this.

variance=np.var(data1)
print("Variance:{0:8.4f}".format(variance))
standarddev=np.std(data1)
print("STD:{0:8.4f}".format(standarddev))

4. What is the middle value of the age range? This is answered by finding the median.

median=np.median(data1)
print("Median:{0:8.4f}".format(median))

Finally, we can also plot the whole distribution (a histogram) using an appropriate library. Let us import it first:

import matplotlib.pyplot as plt

Once again, we are importing a package called “matplotlib.pyplot” and assigning a shortcut “plt” for the purpose of our current session. Now we run the following commands on our dataset:

plt.figure()
hist1, edges1 = np.histogram(data1)
plt.bar(edges1[:-1], hist1, width=edges1[1:]-edges1[:-1])

Here, plt.figure() creates an environment for plotting a figure. Then, we get the data for creating a histogram using the second line. This data is passed to the plt.bar() function, along with some parameters for the axes, to produce the histogram we see in Figure 5.3. Note that if you get an error for plt.figure(), just ignore it and continue with the rest of the commands. It just might work! If we are too lazy to type in a whole bunch of values to create a dataset to play with, we could use the random number generation function of numpy, like this:

data2 = np.random.randn(1000)
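The statistics snippets above can be collected into one self-contained, runnable script (same data1 as in the text; the variable names here differ slightly so that Python’s built-in max() and min() functions are not shadowed):

```python
# The chapter's statistics calls, gathered into one runnable script.
import numpy as np

data1 = [85, 62, 78, 64, 25, 12, 74, 96, 63, 45, 78, 20, 5, 30,
         45, 78, 45, 96, 65, 45, 74, 12, 78, 23, 8]

max_val = np.max(data1)       # largest value
min_val = np.min(data1)       # smallest value
mean = np.mean(data1)         # average age
median = np.median(data1)     # middle value
variance = np.var(data1)      # spread of the distribution
standarddev = np.std(data1)   # square root of the variance

print("Max:", max_val)        # 96
print("Min:", min_val)        # 5
print("Mean:", mean)          # 52.24
print("Median:", median)      # 62.0
print("Variance: {0:8.4f}".format(variance))
print("STD: {0:8.4f}".format(standarddev))
```

Running this as one script (rather than line by line at the prompt) is a good habit: you can rerun the whole analysis after changing the data.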
If we are too lazy to type in a whole bunch of values to create a dataset to play with, we could use the random number initialization function of numpy, like this: data2 = np.random.randn(1000) 134 Python Try It Yourself 5.3: Basic Statistics 1 Create an artificial dataset with 1000 random numbers. Run all of the analyses we did before with the new dataset. That means finding ranges, mean, and variance, as well as creating a visualization. If you did this exercise, you would notice that you get bars. But what if you wanted a different number of bars? This may be useful to control the resolution of the figure. Here, we have 1000 data points. So, on one extreme, we could ask for 1000 bars, but that may be too much. At the same time, we may not want to let Python decide for us. There is a quick fix. We can specify howmany of these bars, also called “bins,”we would like. For instance, if we wanted 100 bins or bars, we can write the following code. plt.figure() hist2, edges2 = np.histogram(data2, bins=100) plt.bar(edges2[:-1], hist2, width=edges2[1:]-edges2[:-1]) And the result is shown in Figure 5.4. Note that your plot may look a little different because your dataset may be different than mine. Why? Because we are getting these data points using a random number generator. In fact, you may see different plots every time you run your code starting with initializing data2! Try It Yourself 5.4: Basic Statistics 2 For this hands-on problem, you will need the Daily Demand Forecasting Orders dataset from the UCI machine learning repository,5 comprising 60 days of data from a Brazilian company of large logistics. The dataset has 13 attributes including 12 predictors and the target attribute, total of orders per day. Use this 5 4 3 2 1 0 0 20 40 60 80 100 Figure 5.3 Bar graph showing age distribution. 135 5.5 Statistics Essentials dataset to practice calculating the minimum, maximum, range, and average for all the attributes. 
Plot the data per attribute in a bar graph to visualize the distribution.

Figure 5.4 Bar graph showing distribution of 1000 random numbers.

We have gathered a few useful tools and techniques in the previous section. Let us apply them to a data problem, while also extending our reach with these tools. For this exercise, we will work with a small dataset available from github6 (see link in footnote). This is a macroeconomic dataset with seven economic variables observed from the years 1947 to 1962 (n = 16).

5.5.1 Importing Data

First, we need to import that data into our Python environment. For this, we will use the Pandas library. Pandas is an important component of the Python scientific stack. The Pandas DataFrame is quite handy since it provides useful information, such as column names read from the data source, so that the user can understand and manipulate the imported data more easily. Let us say that the data is in a file "data.csv" in the current directory. The following line loads that data in a variable "CSV_data."

from pandas import read_csv
CSV_data = read_csv('data.csv')

Another way to use Pandas functionalities is the way we have worked with numpy. First, we import the Pandas library and then call its appropriate functions like this:

import pandas as pd
df = pd.read_csv('data.csv')

This is especially useful if we need to use Pandas functionalities multiple times in the code.

5.5.2 Plotting the Data

One of the nice things about Python, with the help of its libraries, is that it has very easy-to-use functionalities when it comes to visualizing the data. All we have to do is to import matplotlib.pyplot and use an appropriate function. Let us say we want to produce a scatterplot of the "Employed" and "GNP" variables. Here is the code:

import matplotlib.pyplot as plt
plt.scatter(df.Employed, df.GNP)

Figure 5.5 shows the result. It seems these two variables are somehow related.
Let us explore it further by first finding the strength of their relation using a correlation function, and then performing a regression.

FYI: Dataframes

If you have used arrays in a programming language before, you should feel well at home with the idea of a dataframe in Python. A very popular way to implement a dataframe in Python is using the Pandas library. We saw above how to use Pandas to import structured data in CSV format into a DataFrame kind of object. Once imported, you could visualize the DataFrame object in Spyder by double-clicking its name in the Variable explorer. You will see that a dataframe is essentially a table or a matrix with rows and columns. And that is how you can access each of its data points. For instance, in a dataframe "df", if you want the first-row, first-column element, you can ask for df.iat[0,0]. Alternatively, if the rows are labeled as "row-1", "row-2", . . . and the columns are labeled as "col-1", "col-2", . . ., you can also ask for the same element with df.at['row-1','col-1']. You see how such addressing makes it more readable?

Figure 5.5 Scatterplot to visualize the relationship between GNP and Employment.

Do you ever need to save your dataframe to a CSV file? That is easy with

df.to_csv('mydata.csv')

There is a lot more that you can do with dataframes, including adding rows and columns, and applying functions. But those are out of scope for us. If you are still interested, I suggest you consult some of the pointers at the end of this chapter. Before proceeding, note that while you can run these commands on your Spyder console and see immediate results, you may want to write them as a part of a program/script and run that program. To do this, type the code above in the editor (left panel) in Spyder, save it as a .py file, and click "Run file" (the "play" button) on the toolbar.
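A quick, self-contained illustration of this addressing, with a tiny made-up dataframe (the row and column labels here are just for the example):

```python
import pandas as pd

# A small labeled dataframe to demonstrate element addressing
df = pd.DataFrame([[10, 20], [30, 40]],
                  index=["row-1", "row-2"],
                  columns=["col-1", "col-2"])

first = df.iat[0, 0]            # position-based: first row, first column
same = df.at["row-1", "col-1"]  # label-based: the very same element
print(first, same)
```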
5.5.3 Correlation

One of the most common tests we often need to do while solving data-driven problems is to see if two variables are related. For this, we can do a statistical test for correlation. Let us assume we have the previous data ready in dataframe df, and we want to find if the "Employed" field and "GNP" field are correlated. We could use the "corrcoef" function of numpy to find the correlation coefficient, which gives us an idea of the strength of correlation between these two variables. Here is that line of code:

np.corrcoef(df.Employed, df.GNP)[0,1]

The output of this statement tells us that there is very high correlation between these two variables, as represented by the correlation coefficient = 0.9835. Also note that this number is positive, which means both variables move together in the same direction. If this correlation coefficient were negative, we would still have a strong correlation, just in the opposite direction. In other words, in this case knowing one variable should give us enough knowledge about the other. Let us ask: If we know the value of one variable (independent variable or predictor), can we predict the value of the other variable (dependent variable or response)? For that, we need to perform regression analysis.

5.5.4 Linear Regression

So, we can learn about two variables relating in some way, but if there is a relationship of some kind, can we figure out if and how one variable could predict the other? Linear regression allows us to do that. Specifically, we want to see how a variable X affects a variable y.7 Here, X is called the independent variable or predictor, and y is called the dependent variable or outcome. Figuring out this relationship between X and y can be seen as building or fitting a model with linear regression. There are many methods for doing so, but perhaps the most common is ordinary least squares (OLS).
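Before applying this to the Longley data, it may help to see np.corrcoef on made-up numbers where we know the answer in advance: a perfectly linear relationship gives a coefficient of exactly +1 (or −1 if one variable decreases as the other increases):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_up = 2 * x + 3     # moves exactly with x
y_down = -2 * x + 3  # moves exactly against x

# np.corrcoef returns a 2x2 matrix; [0,1] is the off-diagonal entry
r_up = np.corrcoef(x, y_up)[0, 1]
r_down = np.corrcoef(x, y_down)[0, 1]
print(r_up, r_down)
```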
For doing linear regression with Python, we can use the statsmodels library's API functions, as follows:

import statsmodels.api as sm
lr_model = sm.OLS(y, X).fit()

Here, lr_model is the model built using linear regression with the OLS fitting approach. How do we know this worked? Let us check the results of the model by running the following command:

print(lr_model.summary())

Somewhere in the output here, we can find the values of the coefficients – one for const (constant) and the other for GNP. And here is our regression equation:

Employed = coeff*GNP + const

Now just substitute a value for GNP, its coefficient (found from the output above), and the constant. See if the corresponding value in Employed matches with the data. There might be some difference, but hopefully not much! Let us look at an example. We know from the dataset that in 1960, the GNP value was 502.601. We will use our regression equation to calculate the value of Employed. Plugging in the values GNP = 502.601, coeff = 0.0348, and const = 51.8436 in the above equation, we get:

Employed = 0.0348*502.601 + 51.8436 = 69.334

Now let us look up the actual value of "Employed" for 1960. It is 69.564. That means we were off by only 0.23. That is not bad for our prediction. And, more important, we now have a model (the line equation) that could also allow us to interpolate and extrapolate. In other words, we could even plug in some unknown GNP value and find out what the approximate value for Employed would be. Well, why do we not do this more systematically? Specifically, let us come up with all kinds of values for our independent variable, find the corresponding values for the dependent variable using the above equation, and plot it on the scatterplot. We will see this as a part of a full example below.

Hands-On Example 5.1: Linear Regression

Here is the full code that shows all the things we have talked about in this section: from importing data to doing various statistical analyses and plotting.
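The arithmetic of this substitution is easy to check directly; the coefficient and constant below are the values reported above, so only the variable names are new:

```python
coeff = 0.0348    # GNP coefficient from the model summary
const = 51.8436   # intercept (const) from the model summary
gnp_1960 = 502.601

# Apply the regression equation: Employed = coeff*GNP + const
employed_pred = coeff * gnp_1960 + const
error = 69.564 - employed_pred  # actual 1960 value minus prediction
print(round(employed_pred, 3), round(error, 2))
```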
Note that anything that starts with "#" in Python is considered a comment and ignored (not run). At the end of the code is one of the plots with a regression line (Figure 5.6). The dataset (Longley.csv) is available for download from OA 5.1.

# Load the libraries we need – numpy, pandas, pyplot, and statsmodels.api
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm

# Load the Longley dataset into a Pandas DataFrame – first column (year) used as row labels
df = pd.read_csv('longley.csv', index_col=0)

# Find the correlation between Employed and GNP
print("Correlation coefficient = ", np.corrcoef(df.Employed, df.GNP)[0,1])

# Prepare X and y for the regression model
y = df.Employed # response (dependent variable)
X = df.GNP # predictor (independent variable)
X = sm.add_constant(X) # Adds a constant term to the predictor

# Build the regression model using OLS (ordinary least squares)
lr_model = sm.OLS(y, X).fit()
print(lr_model.summary())

# We pick 100 points equally spaced from the min to the max
X_prime = np.linspace(X.GNP.min(), X.GNP.max(), 100)
X_prime = sm.add_constant(X_prime) # Add a constant as we did before

# Now we calculate the predicted values
y_hat = lr_model.predict(X_prime)

plt.scatter(X.GNP, y) # Plot the raw data
plt.xlabel("Gross National Product")
plt.ylabel("Total Employment")

# Add the regression line, colored in red
plt.plot(X_prime[:, 1], y_hat, 'red', alpha=0.9)

Figure 5.6 Scatterplot of GNP vs. Employed overlaid with a regression line.

If you see something strange in your plots from the above code, chances are your plotting environments are getting messed up.
To address that, change the code for your plotting as shown below:

plt.figure(1)
plt.subplot(211)
plt.scatter(df.Employed, df.GNP)
plt.subplot(212)
plt.scatter(X.GNP, y) # Plot the raw data
plt.xlabel("Gross National Product")
plt.ylabel("Total Employment")
# Add the regression line, colored in red
plt.plot(X_prime[:, 1], y_hat, 'red', alpha=0.9)

Essentially, we are creating separate spaces to display the original scatterplot and the new scatterplot with the regression line.

5.5.5 Multiple Linear Regression

What we have seen so far is one variable (predictor) helping to predict another (response or dependent variable). But there are many situations in life when there is not a single factor that contributes to an outcome. And so, we need to look at multiple factors or variables. That is when we use multiple linear regression. As the name suggests, this is a method that takes into account multiple predictors in order to predict one response or outcome variable. Let us take an example.

Hands-On Example 5.2: Multiple Linear Regression

We will start by getting a small dataset from OA 5.2. This dataset contains information about advertising budgets for TV and radio and the corresponding sales numbers. What we want to learn here is how much those budgets influence product sales. Let us first load it up in our Python environment.

# Load the libraries we need – numpy, pandas, pyplot, and statsmodels.api
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm

# Load the advertising dataset into a pandas dataframe
df = pd.read_csv('Advertising.csv', index_col=0)

We start our analysis by doing linear regression, as we did before, to see how well we could use the "TV" variable to predict "Sales".

y = df.Sales
X = df.TV
X = sm.add_constant(X)
lr_model = sm.OLS(y, X).fit()
print(lr_model.summary())
print(lr_model.params)

In this output, what we are looking for is the R-squared value.
It is around 0.61, which means that about 61% of the variance in this TV–sales relationship can be explained using the model we built. Well, that is not too bad, but before we move on, let us plot this relationship:

plt.figure()
plt.scatter(df.TV, df.Sales)
plt.xlabel('TV')
plt.ylabel('Sales')

The outcome is shown in Figure 5.7.

Figure 5.7 Scatterplot of TV vs. Sales from the advertising data.

Let us repeat the process for radio.

y = df.Sales
X = df.Radio
X = sm.add_constant(X)
lr_model = sm.OLS(y, X).fit()
print(lr_model.summary())
print(lr_model.params)

plt.figure()
plt.scatter(df.Radio, df.Sales)
plt.xlabel('Radio')
plt.ylabel('Sales')

And Figure 5.8 is what we get as the result. This model gives us an R-squared value of around 0.33, which is even worse than what we got with TV.

Figure 5.8 Scatterplot of Radio vs. Sales from the advertising data.

Now, let us see what happens if we put both of these independent variables (TV and radio) together to predict sales:

y = df['Sales']
X = df[['TV','Radio']]
X = sm.add_constant(X)
lr_model = sm.OLS(y, X).fit()
print(lr_model.summary())
print(lr_model.params)

This comes up with an R-squared close to 90%. That is much better. Seems like two are better than one, and that is our multiple linear regression! And here is the code for plotting this regression in three dimensions (3D), with the result shown in Figure 5.9. Consider this as optional or extra stuff.

from mpl_toolkits.mplot3d import Axes3D

# Figure out the X and Y axes using ranges from TV and Radio
X_axis, Y_axis = np.meshgrid(np.linspace(X.TV.min(), X.TV.max(), 100), np.linspace(X.Radio.min(), X.Radio.max(), 100))

# Plot the hyperplane by calculating the corresponding Z axis (Sales)
Z_axis = lr_model.params[0] + lr_model.params[1] * X_axis + lr_model.params[2] * Y_axis

# Create matplotlib 3D axes
fig = plt.figure(figsize=(12, 8)) # figsize refers to width and height of the figure
ax = Axes3D(fig, azim=-100)

# Plot the hyperplane
ax.plot_surface(X_axis, Y_axis, Z_axis, cmap=plt.cm.coolwarm, alpha=0.5, linewidth=0)

# Plot the data points
ax.scatter(X.TV, X.Radio, y)

# Set axis labels
ax.set_xlabel('TV')
ax.set_ylabel('Radio')
ax.set_zlabel('Sales')

Figure 5.9 Three-dimensional scatterplot showing TV, Radio, and Sales variables.

Try It Yourself 5.5: Regression

Let us practice what you have learned about correlation, regression, and visualization so far with a small dataset that you can obtain from OA 5.3. The All Greens Franchise dataset contains 30 observations about All Greens sales, with five predictor variables apart from the annual net sales figure. Use this dataset to:

1. determine the correlation of annual net sales with money spent on advertising and number of competitors in the area;
2. visualize the above correlation in a scatterplot; and
3. build a regression model to predict the annual net sales figure using the other five columns in the dataset.

5.6 Introduction to Machine Learning

In a couple of chapters, we are going to see machine learning at its full glory (or at least the glory that we could achieve in this book!). But while we are on a roll with Python, it would be worthwhile to dip our toes in the waters of machine learning and see what sorts of data problems we could solve. We will start in the following subsection with a little introduction to machine learning, and then quickly move to recognizing some of the basic problems, techniques, and solutions.
We will revisit most of these with more details and examples in Part III of this book.

5.6.1 What Is Machine Learning?

Machine learning (ML) is a field of inquiry, an application area, and one of the most important skills that a data scientist can list on their résumé. It sits at the intersection of computer science and statistics, among other related areas like engineering. It is used in pretty much every area that deals with data processing, including business, bioinformatics, weather forecasting, and intelligence (the NSA and CIA kind!). Machine learning is about enabling computers and other artificial systems to learn without explicitly programming them. This is where we want such systems to see some data, learn from it, and then use that knowledge to infer things from other data. Why machine learning? Because machine learning can help you turn your data into information; it can allow you to translate a seemingly boring bunch of data points into meaningful patterns that could help in critical decision-making; it lets you harness the true power of having a lot of data. We have already seen and done a form of machine learning when we tried predicting values based on a learned relationship between a predictor and an outcome. The core of machine learning can be explained using the decision tree (which also happens to be the name of one of the ML algorithms!) shown in Figure 5.10. If we are trying to predict a value by learning (from data) how various predictors and the response variables relate, we are looking at supervised learning. Within that branch, if the response variable is continuous, the problem becomes that of regression, which we have already seen. Think about knowing someone's age and occupation and predicting their income. If, on the other hand, the response variable is discrete (having a few possible values or labels), this becomes a classification problem.
For instance, if you are using someone's age and occupation to try to learn if they are a high-earner, medium-earner, or low-earner (three classes), you are doing classification. These learning problems require us to know the truth first. For example, in order to learn how age and occupation could tell us about one's earning class, we need to know the true value of someone's class – do they belong to high-earner, medium-earner, or low-earner? But there are times when the data given to us do not have clear labels or true values. And yet, we are tasked with exploring and explaining that data. In this case, we are dealing with unsupervised learning problems.

Figure 5.10 An outline of core machine learning problems: predicting or forecasting a value calls for supervised learning – regression for a continuous target value, classification for a discrete target value; otherwise, unsupervised learning – clustering to fit data into discrete groups, density estimation for a numeric estimate of how the data is distributed.

Within unsupervised learning, if we want to organize data into various groups, we encounter a clustering problem. This is similar to classification, but, unlike classification, we do not know how many classes there are and what they are called. On the other hand, if we are trying to explain the data by estimating underlying processes that may be responsible for how the data is distributed, this becomes a density estimation problem. In the following subsections, we will learn more about these branches, with, of course, hands-on exercises. Since we have already worked with regression, in this section we will focus on the classification, clustering, and density estimation branches.

5.6.2 Classification (Supervised Learning)

The task of classification is this: Given a set of data points and their corresponding labels, learn how they are classified, so when a new data point comes, we can put it in the correct class. There are many methods and algorithms for building classifiers, one of which is k-nearest neighbor (kNN).
Here is how kNN works:

1. As in the general problem of classification, we have a set of data points for which we know the correct class labels.
2. When we get a new data point, we compare it to each of our existing data points and find similarity.
3. Take the most similar k data points (k nearest neighbors).
4. From these k data points, take the majority vote of their labels. The winning label is the label or class of the new data point.

Usually k is a small number between 2 and 20. As you can imagine, the larger the number of nearest neighbors (the value of k), the longer it takes us to do the processing. Finding similarity between data points is also something that is very important, but we are not going to discuss it here. For the time being, it is easier to visualize our data points on a plane (in two or three dimensions) and think about the distance between them using our knowledge from linear algebra.

Hands-On Example 5.3: Classification

Let us take an example. We will use a wine dataset available from OA 5.4. This data contains information about various attributes of different wines and their corresponding quality. Specifically, each wine is classified as high quality or not. We will consider these as class labels and build a classifier that learns (based on other attributes) how a wine is classified in one of these two classes. We will start by importing the different libraries we will need. Notice a new library, sklearn, which comes from scikit-learn (http://scikit-learn.org/stable/index.html), a popular library for doing machine learning applications in Python.
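Before using scikit-learn's implementation, the four steps above can be sketched from scratch in a few lines of numpy (the points, labels, and the knn_predict function are invented for illustration):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=3):
    # Step 2: distance from the new point to every known point
    dists = np.linalg.norm(X_train - x_new, axis=1)
    # Step 3: take the k nearest neighbors
    nearest = np.argsort(dists)[:k]
    # Step 4: majority vote among their labels
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Step 1: a tiny labeled dataset - two groups of points on a plane
X_train = np.array([[1, 1], [1, 2], [2, 1], [8, 8], [8, 9], [9, 8]])
y_train = np.array(["A", "A", "A", "B", "B", "B"])

print(knn_predict(X_train, y_train, np.array([1.5, 1.5])))  # lands among the "A" points
```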
This example loads the data, trains a classifier on 70% of the total data, tests that classifier on the remaining 30% of the data, and calculates the accuracy of the classifier:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split # in older scikit-learn versions: sklearn.cross_validation

df = pd.read_csv("wine.csv")

# Mark about 70% of the data for training and use the rest for testing
# We will use 'density', 'sulfates', and 'residual_sugar' features
# for training a classifier on 'high_quality'
X_train, X_test, y_train, y_test = train_test_split(df[['density','sulfates','residual_sugar']], df['high_quality'], test_size=.3)

classifier = KNeighborsClassifier(n_neighbors=3)
classifier.fit(X_train, y_train)

# Test the classifier by giving it test instances
prediction = classifier.predict(X_test)

# Count how many were correctly classified
correct = np.where(prediction==y_test, 1, 0).sum()
print(correct)

# Calculate the accuracy of this classifier
accuracy = correct/len(y_test)
print(accuracy)

Note that the above example uses k = 3 (checking three nearest neighbors when doing the comparison). The accuracy is around 76% (you will get a different number every time, because a different set of data is used for training and testing every time you run the program). But what would happen if the value of k were different? Let us try building and testing the classifier using a range of values for k and plot the accuracy corresponding to each k.
# Start with an array where the results (k and corresponding accuracy) will be stored
results = []

for k in range(1, 51, 2):
    classifier = KNeighborsClassifier(n_neighbors=k)
    classifier.fit(X_train, y_train)
    prediction = classifier.predict(X_test)
    accuracy = np.where(prediction==y_test, 1, 0).sum() / (len(y_test))
    print("k=", k, "Accuracy=", accuracy)
    results.append([k, accuracy]) # Storing the (k, accuracy) tuple in the results array

# Convert that series of tuples into a dataframe for easy plotting
results = pd.DataFrame(results, columns=["k", "accuracy"])

plt.plot(results.k, results.accuracy)
plt.title("Value of k and corresponding classification accuracy")
plt.show()

The plotting result is shown in Figure 5.11. Note that, again, every time you run this program, you will see slightly different results (and plot). In the output, you will also notice that after a certain value of k (typically 15), the improvements in accuracy are hardly noticeable. In other words, we reach the saturation point.

Try It Yourself 5.6: Classification

Let us try what you just learned about classification with another dataset. The wheat dataset in UCI's repository (get it from OA 5.5) comprises data about kernels belonging to three different varieties of wheat: Kama, Rosa, and Canadian. For each wheat variety, with a random sample of 70 elements, a high-quality visualization of the internal kernel structure was detected using a soft X-ray technique. Seven geometric parameters of wheat kernels were measured. Use these measures to classify the wheat variety.

Figure 5.11 Plot showing how different values of k affect the accuracy of the kNN model built here.

5.6.3 Clustering (Unsupervised Learning)

Now we will look at another branch of machine learning.
In the example we just saw, we knew that we had two classes, and the task was to assign a new data point to one of those existing class labels. But what if we do not know what those labels are, or even how many classes there are to begin with? That is when we apply unsupervised learning with the help of clustering.

Hands-On Example 5.4: Clustering

Let us take an example. As before, we will want to get some of our required libraries imported before getting to work:

import numpy as np
import matplotlib.pyplot as plt

# Import style class from matplotlib and use that to apply ggplot styling
from matplotlib import style
style.use("ggplot")

Now, we will take a bunch of data points and cluster them. How, you ask? Using a fantastic algorithm called k-means. In addition to being fantastic, k-means is a simple algorithm. Here is how it works.

1. First, we guess or determine the number of clusters (k) we want. If our data points are in n dimensions, the centers of these clusters, or centroids, will also be n-dimensional points. In other words, we will have a total of k n-dimensional points that we will call the centroids. Yes, these are essentially just some random points in that n-dimensional space.
2. Now, we assign each of the actual data points to one of these k centroids based on their distances to these centroids. After this step, each data point will be assigned to one of the k clusters.
3. Now, we recompute each cluster's centroid. So, we end up with k centroids again, but these are now adjusted to reflect how the data points are distributed.

We keep repeating steps 2 and 3 until we converge. In other words, we cease this iterative process when the centroids of the k clusters are no longer changing. And at that point we have "real" k clusters, with each data point belonging to one of them. Fortunately, with the right package in Python, we do not need to implement all of this from scratch.
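Still, to make those steps concrete, here is a minimal from-scratch sketch of the same loop (the 2D points are made up, and the kmeans_simple function is our own, not from a library; a fixed seed keeps the run reproducible):

```python
import numpy as np

def kmeans_simple(X, k, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    # Step 1: pick k of the data points as the initial centroids
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Step 2: assign each point to its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 3: recompute each centroid as the mean of its points
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break  # converged: centroids are no longer changing
        centroids = new_centroids
    return centroids, labels

# Two visually obvious groups of points
X = np.array([[1, 2], [1.5, 1.8], [1, 0.6], [5, 8], [8, 8], [9, 11]])
centroids, labels = kmeans_simple(X, k=2)
print(labels)  # the first three points share one label, the last three the other
```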
That right package is sklearn.cluster, which contains implementations of various clustering algorithms, including k-means. Let us import it:

# Get the KMeans class from the clustering library available within scikit-learn
from sklearn.cluster import KMeans

For this exercise, we are just going to make up some data points in two-dimensional (2D) space (so we can visualize them easily):

# Define data points on a 2D plane using Cartesian coordinates
X = np.array([[1, 2], [5, 8], [1.5, 1.8], [8, 8], [1, 0.6], [9, 11]])

Now, we will proceed with clustering as well as visualizing the clusters that the k-means algorithm generates:

# Perform clustering using the k-means algorithm
kmeans = KMeans(n_clusters=2)
kmeans.fit(X)

# 'kmeans' holds the model; extract information about the clusters
# as represented by their centroids, along with their labels
centroids = kmeans.cluster_centers_
labels = kmeans.labels_
print(centroids)
print(labels)

# Define a colors array
colors = ["g.", "r.", "c.", "y."]

# Loop to go through each data point, plotting it on the plane
# with a color picked from the above list – one color per cluster
for i in range(len(X)):
    print("Coordinate:", X[i], "Label:", labels[i])
    plt.plot(X[i][0], X[i][1], colors[labels[i]], markersize=10)

# Plot the centroids using "x"
plt.scatter(centroids[:, 0], centroids[:, 1], marker="x", s=150, linewidths=2, zorder=10)
plt.show()

Figure 5.12 has the output. As you can see, there are six points plotted. It is easy to imagine that if we were to look for two clusters, we can have one group at the bottom-left (represented in red) and another group at the top-right (represented in green). Here we have used the k-means algorithm, which aims to find k unique clusters where the center of each cluster (centroid) is the mean of the values in that cluster. These centroids are represented using blue "X" symbols. Now let us see what happens if we want three clusters.
Change the n_clusters argument (input or parameter) in the KMeans function to be 3. And voilà! Figure 5.13 shows how this algorithm can give us three clusters with the same six data points. You can even try n_clusters=4. So, you see, this is unsupervised learning, because the labels or colors of the data points are not known in advance to be able to put them in classes. In fact, we do not even know how many labels or classes there should be, and so we could impose almost as many of those as we like. But things are not always this simple. Because we were dealing with two dimensions and so few data points, we could even visually identify how many clusters could be appropriate. Nonetheless, the example above demonstrates how unsupervised clustering typically works.

Figure 5.12 Output of clustering with k-means (k = 2).
Figure 5.13 Output of clustering with k-means (k = 3).

Try It Yourself 5.7: Clustering

For this homework, you are going to use the travel reviews dataset (see OA 5.6) in UCI's repository, which was created by crawling travelers' reviews from TripAdvisor.com. Reviews on destinations across East Asia are considered in 10 categories. Each traveler's rating is categorized as Terrible (0), Poor (1), Average (2), Very Good (3), or Excellent (4), and the average rating is used against each category per user. Use the clustering method you just learned to group the destinations that have similar ratings.

5.6.4 Density Estimation (Unsupervised Learning)

One way to think about clustering when we do not know how many clusters we should have is to let a process look for data points that are dense together and use that density information to form a cluster. One such technique is MeanShift. Let us first understand what density information, or a density function, is. Imagine you are trying to represent the likelihood of finding a Starbucks in an area.
You know that if it is a mall or a shopping district, there is a good probability that there is a Starbucks (or two or three!), as opposed to a less populated rural area. In other words, Starbucks has a higher density in cities and shopping areas than in less populated or less visited areas. A density function is a function (think about a curve on a graph) that represents the relative likelihood of a variable (e.g., existence of Starbucks) taking on a given value. Now let us get back to MeanShift. This is an algorithm that locates the maxima (maximum values) of a density function given a set of data points that fit that function. So, roughly speaking, if we have data points corresponding to the locations of Starbucks, MeanShift allows us to figure out where we are likely to find Starbucks; or, on the other hand, given a location, how likely we are to find a Starbucks.

Hands-On Example 5.5: Density Estimation

To see this in action, we will define a density function, have it generate a bunch of data points that fit that function, and then try to locate the centroids of these data points using density estimation. This almost seems like a self-fulfilling prophecy! But it allows us to practice and see how unsupervised clustering works when we do not even know how many clusters there should be. Here is the code for this whole example, along with inline comments. The visual output as a 3D plot is also shown in Figure 5.14.
import numpy as np
from sklearn.cluster import MeanShift
from sklearn.datasets import make_blobs # in older scikit-learn versions: sklearn.datasets.samples_generator
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

# Import style class from matplotlib and use that to apply ggplot styling
from matplotlib import style
style.use("ggplot")

# Let's create a bunch of points around three centers in a 3D space
# X has those points and we can ignore y
centers = [[1,1,1], [5,5,5], [3,10,10]]
X, y = make_blobs(n_samples=100, centers=centers, cluster_std=2)

# Perform clustering using the MeanShift algorithm
ms = MeanShift()
ms.fit(X)

# 'ms' holds the model; extract information about the clusters as
# represented by their centroids, along with their labels
centroids = ms.cluster_centers_
labels = ms.labels_
print(centroids)
print(labels)

# Find out how many clusters we created
n_clusters_ = len(np.unique(labels))
print("Number of estimated clusters:", n_clusters_)

# Define a colors array
colors = ['r','g','b','c','k','y','m']

# Let's do a 3D plot
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')

# Loop to go through each data point, plotting it in the 3D space
# with a color picked from the above list – one color per cluster
for i in range(len(X)):
    print("Coordinate:", X[i], "Label:", labels[i])
    ax.scatter(X[i][0], X[i][1], X[i][2], c=colors[labels[i]], marker='o')

ax.scatter(centroids[:,0], centroids[:,1], centroids[:,2], marker="x", s=150, linewidths=5, zorder=10)
plt.show()

Figure 5.14 Density estimation plot with three clusters identified.

This plot shows three clusters, but try running the program multiple times and you may find a different number of clusters. And you guessed it – that is because the data points may be slightly different, and how and where we start applying the MeanShift algorithm may differ.
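To see the "shifting" mechanics themselves without the 3D machinery, here is a stripped-down one-dimensional sketch using a flat window of fixed bandwidth (the data, the bandwidth, and the mean_shift_1d function are all made up for illustration; real MeanShift implementations typically use smoother kernels and an estimated bandwidth):

```python
import numpy as np

def mean_shift_1d(points, bandwidth=2.0, n_iter=50):
    # Each point repeatedly moves to the mean of all original points
    # within `bandwidth` of it, climbing toward a density maximum (mode)
    shifted = points.astype(float).copy()
    for _ in range(n_iter):
        for i, p in enumerate(shifted):
            window = points[np.abs(points - p) <= bandwidth]
            shifted[i] = window.mean()
    return shifted

# Two dense regions: one around 1, one around 10
points = np.array([0.5, 1.0, 1.5, 9.5, 10.0, 10.5])
modes = mean_shift_1d(points)
print(np.round(modes, 2))  # points collapse onto the two modes
```

Points that start in the same dense region end up at the same mode, which is exactly how MeanShift decides cluster membership without being told how many clusters to find.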
And this is where we are going to stop our exploration of applying machine learning to various data problems using Python. If you are interested in more, go to Part III of this book. But if you do not know enough about the R statistical tool, you should go through the next chapter first, because later, when we do machine learning, we are going to exclusively use R.

Summary

Python has recently taken the number one spot for programming languages, according to the IEEE.8 And that is not a surprise. It is an easy-to-learn, yet very powerful, language. It is ideal for data scientists because it offers straightforward ways to load and plot data, provides a ton of packages for everything from data visualization to parallel processing, and allows easy integration with other tools and platforms. Want to do network programming? Python has got it. Care about object-oriented programming? Python has you covered. What about GUI? You bet! It is hard to imagine any data science book without coverage of Python, but one of the reasons it makes even more sense for us here is that, unlike some other programming languages (e.g., Java), Python has a very low barrier to entry. One can start seeing results of various expressions and programming structures almost immediately without having to worry about a whole lot of syntax or compilation. There are very few programming environments that are easier than this.9 Not to mention, Python is free, open-source, and easily available. This may not mean much in the beginning, but it has implications for its sustainability and support. Python continues to flourish, be supported, and further enhanced due to a large community of developers who have created outstanding packages that allow a Python programmer to do all sorts of data processing with very little work. And such development continues to grow. Often students ask for a recommendation for a programming language to learn.
It is hard to give a good answer without knowing the context (why do you want to learn programming, where would you use it, how long, etc.). But Python is an easy recommendation for all the reasons above. Having said that, I recommend not being obsessed with any programming tools or languages. Remember what they are – just tools. Our goal, at least in this book, is not to master these tools, but to use them to solve data problems. In this chapter, we looked at Python. In the next, we will explore R. In the end, you may develop a preference for one over the other, but as long as you understand how these tools can be used in solving problems, that is all that matters.

Key Terms

• Integrated Development Environment (IDE): This is an application that contains various tools for writing, compiling, debugging, and running a program. Examples include Eclipse, Spyder, and Visual Studio.
• Correlation: This indicates how closely two variables are related and ranges from −1 (negatively related) to +1 (positively related). A correlation of 0 indicates no relation between the variables.
• Linear regression: Linear regression is an approach to model the relationship between the outcome variable and predictor variable(s) by fitting a linear equation to observed data.
• Machine learning: This is a field that explores the use of algorithms that can learn from the data and use that knowledge to make predictions on data they have not seen before.
• Supervised learning: This is a branch of machine learning that includes problems where a model could be built using the data and true labels or values.
• Unsupervised learning: This is a branch of machine learning that includes problems where we do not have true labels for the data to train with. Instead, the goal is to somehow organize the data into some meaningful clusters or densities.
• Predictor variable: A predictor variable is a variable that is being used to measure some other variable or outcome.
In an experiment, predictor variables are often independent variables, which are manipulated by the researcher rather than just measured.
• Outcome or response variable: An outcome or response variable is in most cases the dependent variable, which is observed and measured by changing the independent variable(s).
• Classification: In our context, a classification task represents the systematic arrangement of data points in groups or categories according to some shared qualities or characteristics. These groups or categories have predefined labels, called class labels.
• Clustering: Clustering involves the grouping of similar objects into a set, called a cluster. The clustering task is similar to classification without the predefined class labels.
• Density estimation: This is a machine learning (typically, unsupervised learning) example where we try to explain the data by estimating underlying processes that may be responsible for how the data is distributed.

Conceptual Questions

1. List arithmetic operators that you can use with Python.
2. List three different data types.
3. How do you get user input in Python?

Hands-On Problems

Problem 5.1
Write a Python script that assigns a value to variable "age" and uses that information about a person to determine if he/she is in high school. Assume that for a person to be in high school, their age should be between 14 and 18. You do not have to write complicated code – simple and logical code is enough.

Problem 5.2
The following are weight values (in pounds) for 20 people: 164, 158, 172, 153, 144, 156, 189, 163, 134, 159, 143, 176, 177, 162, 141, 151, 182, 185, 171, 152. Using Python, find the mean, median, and standard deviation; and then plot a histogram.

Problem 5.3
You are given a dataset named boston (OA 5.7). This dataset contains information collected by the US Census Service concerning housing in the area of Boston, Mass. The dataset is small in size, with only 506 cases.
The data was originally published by Harrison, D., & Rubinfeld, D. L. (1978). Hedonic prices and the demand for clean air. Journal of Environmental Economics and Management, 5, 81–102. Here are the variables captured in this dataset:

CRIM – per capita crime rate by town
ZN – proportion of residential land zoned for lots over 25,000 sq. ft.
INDUS – proportion of non-retail business acres per town
CHAS – Charles River dummy variable (1 if tract bounds river; 0 otherwise)
NOX – nitric oxides concentration (parts per 10 million)
RM – average number of rooms per dwelling
AGE – proportion of owner-occupied units built prior to 1940
DIS – weighted distances to five Boston employment centres
RAD – index of accessibility to radial highways
TAX – full-value property-tax rate per $10,000
PTRATIO – pupil–teacher ratio by town
B – 1000(Bk – 0.63)^2 where Bk is the proportion of blacks by town
LSTAT – % lower status of the population
MEDV – median value of owner-occupied homes in $1000s

Using appropriate correlation and regression tests, find which of the variables is the best predictor of NOX (nitric oxides concentration). For that model, provide the regression plot and equation. Using appropriate correlation and regression tests, find which of the variables is the best predictor of MEDV (median home value). For that model, provide the regression plot and equation.

Problem 5.4
You have experienced a classification method, the kNN classifier, in the class. A classification method or algorithm is developed aiming to address different types of problems. As a result, different classifiers show different classification results, or accuracy. The goal of this assignment is to compare accuracy across the different classifiers. The dataset to use for this assignment is Iris, which is a classic and very easy multiclass classification dataset.
This dataset consists of the petal and sepal measurements of three different types of irises (setosa, versicolor, and virginica), stored in a 150 × 4 numpy.ndarray. The rows are the samples and the columns are: Sepal length, Sepal width, Petal length, and Petal width.

Classes: 3
Samples per class: 50
Samples total: 150
Dimensionality: 4
Features: Real, positive

You can load the iris dataset through the Python code below:

from sklearn import datasets
iris = datasets.load_iris()

You already know how to use kNN. Let's try another classifier: Support Vector Machine (SVM). To use SVM as a classifier, you can use the following code.

from sklearn.svm import SVC  # importing the package
SVC(kernel="linear")  # building the classifier

The second line will give you a classifier that you can store and process further just like you do with a kNN-built classifier. Here we are saying that we want to use a linear kernel for our SVM. Other options are "rbf" (radial basis function) and "poly" (polynomial). Try each of these and see what accuracies you get. Note that every time you run your program, you may get slightly different numbers, so try running it a few times. For the classification, take only the first two columns from the dataset. Split the dataset into 70% for training and 30% for test. Show the resulting accuracies from kNN and three variations of SVM.

Problem 5.5
Let us work with the Iris data again. For the previous question, we used it for doing classification. Now we will do clustering. First, load the data and extract the first two features. Now, do flat clustering using k-means. You can decide how many clusters are appropriate. For this, you may like to plot the data first and see how it is scattered. Show the plot with clusters marked. Having done both classification and clustering on the same dataset, what can you say about this data and/or the techniques you used? Write your thoughts in one or two paragraphs.
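As a starting point for Problem 5.4, here is one way the comparison could be set up. This is a sketch, not the only valid setup: the 70/30 split follows the problem statement, but the random_state value is an arbitrary choice made here so that the split is reproducible.

```python
# Compare kNN against three SVM kernels on the first two iris features,
# using a 70/30 train/test split as the problem specifies.
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

iris = datasets.load_iris()
X, y = iris.data[:, :2], iris.target  # first two columns only

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

models = {
    "kNN": KNeighborsClassifier(),
    "SVM (linear)": SVC(kernel="linear"),
    "SVM (rbf)": SVC(kernel="rbf"),
    "SVM (poly)": SVC(kernel="poly"),
}

# Fit each model and record its accuracy on the held-out 30%
accuracies = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    accuracies[name] = model.score(X_test, y_test)
    print(name, "accuracy:", round(accuracies[name], 3))
```

With only two features, the accuracies will be lower than with all four; changing random_state reshuffles the split and nudges the numbers, which is exactly the run-to-run variation the problem asks you to observe.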
Problem 5.6
For this exercise, you need to work with the breast cancer Coimbra dataset. First download the dataset from OA 5.8 and load the data. The dataset has 10 features including the class labels (1 or 2). Next, you need to round off the Leptin feature values to two decimal places. Having done that, use the first nine attributes (dataset minus the class label) to group the data points into two clusters. You can use any clustering algorithm of your choice, but the number of clusters should remain the same. Once the clustering is complete, use the class labels to evaluate the accuracy of the clustering algorithm that you chose.

Further Reading and Resources

If you want to learn more about Python and its versatile applications, here are some useful resources.

Python tutorials:
• https://www.w3schools.in/python-tutorial/
• https://www.learnpython.org/
• https://www.tutorialspoint.com/python/index.htm
• https://www.coursera.org/learn/python-programming
• https://developers.google.com/edu/python/
• https://wiki.python.org/moin/WebProgramming

Hidden features of Python:
• https://stackoverflow.com/questions/101268/hidden-features-of-python

DataCamp tutorial on Pandas DataFrames:
• https://www.datacamp.com/community/tutorials/pandas-tutorial-dataframe-python

Notes

1. Python download: https://www.python.org/downloads/
2. PyDev: http://www.pydev.org
3. Python: https://www.python.org/downloads/
4. Anaconda Navigator: https://anaconda.org/anaconda/anaconda-navigator
5. Daily Demand Forecasting Orders dataset: https://archive.ics.uci.edu/ml/machine-learning-databases/00409/Daily_Demand_Forecasting_Orders.csv
6. GitHub: http://vincentarelbundock.github.io/Rdatasets/csv/datasets/longley.csv
7. Notice that the predictor variable X is in uppercase and the outcome y is in lowercase. This is on purpose.
Often, there are multiple predictor variables, making X a vector (or a matrix), whereas most times (and perhaps for us, all the time), there will be a single outcome variable.
8. IEEE (Python #1): http://spectrum.ieee.org/computing/software/the-2017-top-programming-languages
9. Yes, there are easier and/or more fun ways to learn/do programming. One popular example is Scratch: https://scratch.mit.edu/.

6 R

"Not everything that can be counted counts, and not everything that counts can be counted." — Albert Einstein

What do you need?
• Computational thinking (refer to Chapter 1).
• Ability to install and configure software.
• Knowledge of basic statistics, including correlation and regression.
• (Ideally) Prior exposure to any programming language.

What will you learn?
• Loading structured data into R.
• Using R to do statistical analysis, including producing models and visualizations.
• Applying introductory machine learning techniques such as classification and clustering with R to various data problems.

6.1 Introduction

While a versatile programming language such as Python can provide a framework to work with data and logic effectively, often we want to stay focused on data analysis. In other words, we could use a programming environment that is designed for handling data and is not concerned with programming so much. There are several such environments or packages available – SPSS, Stata, and Matlab. But nothing can beat R for a free, open-source, and yet very powerful data analytics platform. And just because R is free, do not think even for a second that it is somehow inferior. R can do it all – from simple math manipulations to advanced visualization. In fact, R has become one of the most-used tools in data science, and not just because of its price. This chapter provides an introduction to the R language with its syntax, a few examples, and how it can be integrated with Python.
Together, Python and R are, perhaps, the two most important tools in data science.

6.2 Getting Access to R

R is open-source software for statistical computing, and it is available on all the major platforms for free. If you have ever used or heard about Matlab, SPSS, SAS, etc., well, R can do those kinds of things: statistics, data manipulation, visualization, and running data mining and machine learning algorithms. But it costs you nothing and you have an amazing set of tools at your disposal that is constantly expanding. There is also an active community building and supporting these tools. R offers you all the power of statistics and graphics that you can handle. No wonder it is becoming an industry standard for scientific computing. You can download R from the R site.1 There you can also read up on R and related projects, join mailing lists, and find all the help you need to get started and do amazing things. Appendix D has more details for downloading and installing, if you like. Once downloaded and installed, start the R program. You will be presented with the R console (Figure 6.1). Here, you can run R commands and programs that we will see in the next sections of this chapter. To leave the program, enter q().

Figure 6.1 A screenshot of the R console.

But, as before, instead of using this program directly, we will take advantage of an Integrated Development Environment (IDE). In this case, it is RStudio. So, go to RStudio2 and pick the version that matches your operating system. Again, once you have downloaded and installed RStudio, go ahead and start it up. You should see multiple windows or panes (see Figure 6.2), including the familiar R console where you can run R commands. As with the regular R, you can enter q() on the RStudio console and exit.

6.3 Getting Started with R

Let us get started with actually using R now, and we will do it the way we have been learning in this book – using hands-on exercises.
6.3.1 Basics

Let us start with things that are similar to what we did on the Python console. Everything below that you see with ">" is what you would enter at the command prompt (">") in R (or RStudio). The lines with bracketed numbers (e.g., "[1]") show outputs from running those commands.

Figure 6.2 A screenshot of RStudio.

> 2+2
[1] 4
> x=2
> y=2
> z=x+y
> z
[1] 4

While these may be self-explanatory, let us still go ahead and explain them. We started by simply entering a mathematical expression (2+2) and R executes it, giving us the answer. Next, we defined and assigned values to variables x and y. Similar to other scripting languages such as Python, we do not need to worry about the data type of a variable. Just as we can assign a value to a variable (e.g., "x=2"), we can also assign a mathematical expression to a variable (above, "z=x+y") and, in that case, R will compute that expression and store the result in that variable. If you want to see what is in a variable, simply enter the name of that variable on the prompt. We should note here that you could also assign values to a variable using "<-" like this:

> a <- 7

In this book, you will see me using both of these notations for assigning values and writing expressions. Next, let us work with logical operations. The most common logical operators should sound quite familiar: ">", "<", ">=", and "<=". What may not be so apparent, especially if you have not done much or any programming before, are the operators for comparison ("==") and for negation ("!="). Let us go ahead and practice some of these:

> 2>3
[1] FALSE
> 2==2
[1] TRUE

As you can see, the result of a logical operation is a Boolean value – "TRUE" or "FALSE."

Hands-On Example 6.1: Basics

Now, let us write a small program on the R console:

> year = 2020
> if (year%%4==0)
+   print("Leap year")
[1] "Leap year"

Here, we assigned a value to variable "year" and checked if it is divisible by 4 using the modulus operator (%%).
If it is, then we declare the year to be a leap year. We have done this before, but now we see how the same problem can be solved using R. Now let us put this code in a file. In RStudio, select "File > New File > R Script." This should open an editor where you can type your code. There, write the following:

year = 2020
if (year%%4==0) {
  print("Leap year")
} else {
  print("Not a leap year")
}

Save the file. Typically, R scripts have an ".r" extension. You can run one line at a time by putting your cursor on that line and hitting the "Run" button in the toolbar of the editor. If you want to run the whole script, just select it all ("Ctrl+A" on a PC or "Cmd+A" on a Mac) and click "Run." The output will be in the console. Now that you know how to put R code in a file and run it, you can start taking advantage of existing R scripts and even creating your own.

Try It Yourself 6.1: Basics

1. Use R to calculate the value of the following equation: ((2 × 7) + 12)²
2. Use the if–else structure to check if the value from the last question is divisible by 3. If the number is divisible by 3, check if the same number is also divisible by 4. If the number is not divisible by either 3 or 4, print the immediate larger number that is divisible by both 3 and 4.

6.3.2 Control Structures

Just like Python, R supports some basic control structures. In general, control structures are of two types: decision control structures and loop control structures. As the name suggests, if you want to decide whether a statement or a set of statements should be executed or not, based on some condition, you need a decision control structure, for example, an "if–else" block. But if you need the same set of statements to be executed iteratively as long as the decision condition remains true, you want loop control structures, for example, a "for" loop, a "do–while" loop, etc. Let us look at a few examples.
Say you would like to decide if the weather is OK for a bicycle trip based on the percentage of humidity present, and you want to write code for this. It may look something like:

humidity = 45
if (humidity < 40) {
  print("Perfect for a trip")
} else if (humidity > 70) {
  print("Not suitable for a trip")
} else {
  print("May or may not be suitable for a trip")
}

As shown in the above lines of code, the three conditions on which the decision is based are defined here as "humidity less than 40%," "more than 70%," and everything in between.

Hands-On Example 6.2: Control Structures

We will now work on an extended example to practice loop control structures. What if you had accurate predictions of humidity for the next seven days? Wouldn't it be good if you could make some decisions for all seven days? Here is how to do it:

# This is to store the humidity percentages in a vector
humidity <- c(20, 30, 60, 70, 65, 40, 35)
count <- 1
while (count <= 7) {
  cat("Weather for day ", count, ":")
  if (humidity[count] < 40) {
    print("Perfect for a trip")
  } else if (humidity[count] > 70) {
    print("Not suitable for a trip!")
  } else {
    print("May or may not be suitable for a trip")
  }
  count = count + 1
}

The same objective can be achieved with a "for" loop as well. Here is a demonstration:

# This is to store the humidity percentages in a vector
humidity <- c(20, 30, 60, 70, 65, 40, 35)
for (count in 1:7) {
  cat("Weather for day ", count, ":")
  if (humidity[count] < 40) {
    print("Perfect for a trip")
  } else if (humidity[count] > 70) {
    print("Not suitable for a trip!")
  } else {
    print("May or may not be suitable for a trip")
  }
}

Stop here for a moment and make sure all of the code above makes sense to you. Of course, the best way to ensure this is to actually try it all by yourself and make some changes to see if your logic works out the right way. Recall our discussion on computational thinking from the first chapter in this book. This will be a good place to practice it.
Try It Yourself 6.2: Control Structures

1. Use a "for" loop to print all the years that are leap years between 2008 and 2020.
2. Use a "while" loop to calculate the number of characters in the following line: "Today is a good day."

6.3.3 Functions

As you know, functions allow us to store a procedure, or a computational or logical block, that can be reused. It is like a chef pre-making broth that she can then keep using in different dishes throughout the day without having to make it every time a recipe calls for it.

Hands-On Example 6.3: Functions

Let us start by creating a function that can be reused. Here is a function named "Square" that takes a value (often called a parameter or input) and returns its square.

Square = function(x) {
  return (x*x)
}

Write this code in a file and save it. Now run the whole script. Nothing will happen. But now when you go to the console and run a command like "Square(4)", you should see an appropriate answer. What is happening? You have guessed it! We just created a function called "Square" and it is ready to be used anytime we need that kind of functionality. In fact, you can see in the "Environment" pane of RStudio that we have this particular function available to us.

Try It Yourself 6.3: Functions

Write a function that takes two values as inputs and returns the one that is the smaller of those two. Show how you could use this function by writing some code that calls the function and outputs the result.

6.3.4 Importing Data

Now we come to the most useful parts of R for data science. Almost none of our actual problem-solving will work without first having some data available to us in R. Let us see how to import data into R. For now, we will work with CSV data, as we have done before. R has a function file.choose() that allows you to pick out a file from your computer and a read.table() function to read that file as a table, which works great for csv-formatted data.
For this example, we will use the IQ data file (iqsize.csv), available from OA 6.7. Type the following code line by line, or save it as an R script and run one line at a time:

df = read.table(file.choose(), header=TRUE, sep=",")
brain = df["Brain"]
print(summary(brain))

Running the first line brings up a file selection box. Navigate to the directory where iqsize.csv is stored and select it. This is the result of the file.choose() function, which is the first argument (parameter, input) in the read.table() function on that first line. Alternatively, you can put the file name (with full path) in quotes for that first argument. The second argument means we are expecting to see column labels in the first row (header), and the third argument indicates how the columns are separated. Once we have the data loaded in our dataframe (df) variable, we can process it. In the second line, we are selecting the data stored in the "Brain" column. And then in the third line we are using the function summary() so we can obtain some basic statistical characteristics of this dataframe and print them, as seen below:

     Brain
Min.   : 79.06
1st Qu.: 85.48
Median : 90.54
Mean   : 90.68
3rd Qu.: 94.95
Max.   :107.95

Do these quantities look familiar? They should if you know your basic statistics or have reviewed descriptive statistics covered earlier in this book!

6.4 Graphics and Data Visualization

One of the core benefits of R is its ability to provide data visualizations with very little effort, thanks to its built-in support, as well as numerous libraries or packages and functions available from many developers around the world. Let us explore this.

6.4.1 Installing ggplot2

Before working with graphics and plotting, let us make sure we have the appropriate libraries. Open RStudio and select Tools > Install Packages. In the dialog box that pops up, make sure the CRAN repository is selected for the installation source. Now, type "ggplot2" in the packages box.
Make sure "Install dependencies" is checked. Hit "Install" and the ggplot2 package should be downloaded and installed.

FYI: Dataframes

We saw the idea of a dataframe in the Python chapter. In a way, it is the same for R; that is, a dataframe is like an array or a matrix that contains rows and columns. There are a couple of key differences, however. First, in R, a dataframe is inherently available without having to load any external packages like we did for Python with Pandas. The second big difference is how elements in a dataframe are addressed. Let us see this using an example. We will use one of the in-built dataframes called "mtcars". This dataframe has car models as rows and various attributes about a car as columns. If you want to find the mpg for a Fiat 128 model, you can enter

> mtcars['Fiat 128', 'mpg']

If you want the whole record for that car, you can enter

> mtcars['Fiat 128', ]

In other words, you are referring to a specific row and all corresponding columns with the above addressing. Of course, you can also address an element in a dataframe using an index such as mtcars[12,1], but, as you can see, addressing rows, columns, or a specific element by name makes things a lot more readable. If you are interested in exploring dataframes in R, you may want to look at some of the pointers for further reading at the end of this chapter.

6.4.2 Loading the Data

For the examples in this section, we will work with customer data regarding health insurance. It is available from OA 6.1. The data is in a file named custdata.tsv. Here, "tsv" stands for tab-separated values. That means, instead of commas, the fields are separated using tabs. Therefore, our loading command will become:

custdata = read.table('custdata.tsv', header=T, sep='\t')

Here, '\t' indicates the tab character. The above command assumes that the custdata.tsv file is in the current directory.
If you do not want to take chances with that, you can replace the file name with the file.choose() function, so when the read.table() function is run, a file navigation box will pop up, allowing you to pick out the data file from your computer. That line will look like:

custdata = read.table(file.choose(), header=T, sep='\t')

6.4.3 Plotting the Data

Let us start with a simple histogram of our customers' ages. First, we need to load the ggplot2 library and use its histogram function, like this:

library(ggplot2)
ggplot(custdata) + geom_histogram(aes(x=age), binwidth=5, fill="blue")

This generates a nice histogram (Figure 6.3). In the code, "binwidth" indicates the range that each bar covers on the x-axis. Here, we set it to 5, which means it would look at ranges such as 0–5, 6–10, and so on. Each bar on the plot then represents how many items or members fall within a given range. So, as you can imagine, if we increase the range, we get more items per bar and the overall plot gets "flatter." If we reduce the range, fewer items fit in one bar and the plot looks more "jagged." But did you see how easy creating such a plot was? We typed only one line. Now, if you have never used such a statistical package before and relied on spreadsheet programs to create graphs, you might have thought it was easy to create a graph with those programs using point and click. But was it difficult to type that one line here? Besides, with this one line, we can more easily control how the graph looks than you could with that point and click. And, maybe it is just me, but I think the result looks a lot more "professional" than what one could get from those spreadsheet programs! Here, you can see that the histogram function has an argument for what the x-axis will show, and how many data points to fit in each bin. Let us now look at a field with categorical values. The data, "marital.stat," is like that.
We can use a bar chart to plot the data (Figure 6.4).

Figure 6.3 Histogram showing customer age distribution.

ggplot(custdata) + geom_bar(aes(x=marital.stat), fill="blue")

So, is histogram the only type of chart available in R? Of course not. There are many other types of charts one could draw. For example, you can draw a pie chart to plot the distribution of the housing types, even though pie charts are not recommended in the standard R documentation, and the features available are somewhat limited. Here is how to do it. First, you need to build a contingency table of the counts at each factor level.

contingencyTable <- table(custdata$housing.type)

Now, you can use the pie() function in R to draw the pie chart of housing types (Figure 6.5).

pie(contingencyTable, main="Pie Chart of Housing Types")

If you do not like the default color scheme of pie() in R, you can select different color variations from those available, but you may need to specify the number of colors you need according to the number of pie slices in your chart. Here is how to do it.

pie(contingencyTable, main="Pie Chart of Housing Types",
    col = rainbow(length(contingencyTable)))

Figure 6.4 Bar plot showing distribution of marital status in the customer data.

Hands-On Example 6.4: Plotting

In the last two visualizations, one similarity that stands out is the discreteness of the data points. But what if all the data points belong to a sequence, and you want to preserve the sequential nature of the data in your visualization? One possible solution is a line graph. Let us take an example. Table 6.1 contains a dataset on the average stock price of a company, X, in the first five hours of trading on the stock exchange. You can obtain it from OA 6.2.
If you are interested in the progress of the stock price of the company, you can use a line graph to visualize the situation. The following lines of code should do the job.

Figure 6.5 Pie chart of housing types (slices: Homeowner with mortgage/loan, Occupied with no rent, Homeowner free and clear, Rented).

Table 6.1 Average stock price (in USD) of company X in first five hours of trading.

Hour of operation    Average stock price in USD
1                    12.04
2                    12.80
3                    13.39
4                    13.20
5                    13.23

stock <- read.csv('stocks.csv', header = TRUE, sep = ",")
plot(stock$Average.stock.price.in.USD., type = "o", col = "red",
     xlab = "Hours of operation", ylab = "Average stock price")

The above lines should generate the line graph in Figure 6.6. There are many other types of charts and different variations of the same chart that one can draw in R. But we will stop at these four here and look at some other things we could do and plot with the data we have. We will start by first loading a dataset about customer age and income:

custdata <- read.csv('custdata.tsv', header = TRUE, sep = "\t")

Let us find a correlation between age and income, which will tell us how much and in which ways age and income are related. It is done using a simple command:

cor(custdata$age, custdata$income)

This gives a low correlation of 0.027. That means these two variables do not relate to one another in any meaningful way. But wait a minute. A careful examination of the data tells us that there are some null values, meaning some of the ages and incomes are reported to be 0. That cannot be true, and perhaps this is a reflection of missing values. So, let us redo the correlation, this time picking the values that are non-zero. Here is how we could create such a subset:

Figure 6.6 Line graph of average stock prices against the hour of operation.
custdata2 <- subset(custdata, (custdata$age > 0 & custdata$age < 100 & custdata$income > 0))

The subset() function allows us to take a sample of the data according to the specified conditions (in this case, "age" being greater than 0 and less than 100, and "income" being greater than 0). And now let us do that correlation calculation again:

cor(custdata2$age, custdata2$income)

This time we get −0.022. That is still pretty low, but do you see that the sign has changed? As we move forward with other forms of data analytics (with R or any other tool), pay attention to the nature of the data. It is not always clean or right, and if we are not careful, we may end up with results that either do not make sense or, worse, are dead wrong.

Try It Yourself 6.4: Correlation

In this exercise you are going to use the cloth dataset available from OA 6.3. This dataset has measurements of cloth dimension (x) and the number of flaws (y) in each piece. Use this dataset to probe the relation between the number of flaws in a cloth and its dimension. Are these two related? If yes, then to what extent and in which direction?

6.5 Statistics and Machine Learning

We reviewed statistical concepts in Chapter 3 and saw how we could use them (and some more) with Python in the previous chapter. Now, it is time to use R to do the same or similar things. So, before you begin this section, make sure that you have at least reviewed those statistical concepts. In addition, we will see how some basic machine learning techniques in R could help us solve data problems. Regarding machine learning, I would suggest reviewing the introductory part of the previous chapter.

6.5.1 Basic Statistics

We will start by getting some descriptive statistics. Let us work with the "size.csv" data, which you can download from OA 6.4. This data contains 38 records of different people's sizes in terms of height and weight.
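Before loading the file, note that every figure reported by R's summary() command can also be computed individually. A quick sketch with made-up heights (these five numbers are illustrative only, not the size.csv data):

```r
# Five made-up heights, standing in for a column like size$Height
height <- c(62, 66, 68, 70, 77)

mean(height)                      # the "Mean" row of summary()
median(height)                    # the "Median" row
quantile(height, c(0.25, 0.75))   # the "1st Qu." and "3rd Qu." rows
range(height)                     # the "Min." and "Max." rows
```

Knowing these individual functions is useful when you need just one statistic inside a larger script, rather than the whole summary() printout.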
Here is how we load it:

size = read.table('size.csv', header=T, sep=',')

Once again, this assumes that the data is in the current directory. Alternatively, you can replace 'size.csv' with file.choose() to let you pick the file from your hard drive when you run this line. Also, while you can run one line at a time on your console, you could type the lines and save them as an ".r" file, so that not only can you run them line by line, but you can also store the script for future runs.

Either way, I am assuming at this point that you have the data loaded. Now we can ask R to give us some basic statistics about it by running the summary command:

summary(size)

     Height          Weight
 Min.   :62.00   Min.   :106.0
 1st Qu.:66.00   1st Qu.:135.2
 Median :68.00   Median :146.5
 Mean   :68.42   Mean   :151.1
 3rd Qu.:70.38   3rd Qu.:172.0
 Max.   :77.00   Max.   :192.0

The output, as shown above, gives descriptive statistics for the two variables, or columns, we have here: "Height" and "Weight." We have seen such output before, so I will not bother with the details. Let us visualize this data on a scatterplot. In the following line, "ylim" specifies the minimum and maximum values for the y-axis:

library(ggplot2)
ggplot(size, aes(x=Height, y=Weight)) + geom_point() + ylim(100,200)

The outcome is shown in Figure 6.7. Once again, you have got to appreciate how easy it is with R to produce such professional-looking visualizations.

[Figure 6.7: Scatterplot of Height vs. Weight.]

6.5.2 Regression

Now that we have a scatterplot, we can start asking some questions. One straightforward question is: What is the relationship between the two variables we just plotted? That is easy.
With R, you can keep the existing plotting information and just add a function to find a line that captures the relationship:

ggplot(size, aes(x=Height, y=Weight)) + geom_point() + stat_smooth(method="lm") + ylim(100,200)

Compare this command to the one we used above for creating the plot in Figure 6.7. You will notice that we kept all of it and simply added a segment that overlays a line on top of the scatterplot. And that is how easy it is to do basic linear regression in R, a form of supervised learning. Here, the "lm" method refers to a linear model. The output is in Figure 6.8. You see that blue line? That is the regression line. It is also a model that shows the connection between the "Height" and "Weight" variables. What it means is that if we know the value of "Height," we can figure out the value of "Weight" anywhere on this line. Want to see the line equation? Use the lm() command to extract the coefficients:

lm(Weight ~ Height, size)

And here is the output.

Call:
lm(formula = Weight ~ Height, data = size)

Coefficients:
(Intercept)       Height
   -130.354        4.113

[Figure 6.8: Linear regression connecting Height to Weight.]

You can see that the output contains coefficients for the independent or predictor variable (Height) and for the constant or intercept. The line equation becomes:

Weight = -130.354 + 4.113*Height

Try plugging different values of "Height" into this equation and see what values of "Weight" you get, and how close your predicted or estimated values are to reality. With linear regression, we managed to fit a straight line through the data. But perhaps the relationship between "Height" and "Weight" is not all that straight. So, let us remove the restriction of a linear model:

ggplot(size, aes(x=Height, y=Weight)) + geom_point() + geom_smooth() + ylim(100,200)

And here is the output (Figure 6.9). As you can see, our data fits a curved line better than a straight line.
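Before we dwell on the curved fit, it is worth seeing what "plugging in" a height actually amounts to. A sketch using the coefficients reported above (the height of 70 inches is just an illustrative value of my choosing):

```r
# Coefficients taken from the lm() output above
intercept <- -130.354
slope <- 4.113

# Predicted weight for someone 70 inches tall
intercept + slope * 70  # → 157.556

# With the fitted model object, predict() does the same arithmetic:
# model <- lm(Weight ~ Height, data = size)
# predict(model, newdata = data.frame(Height = 70))
```

In practice you would prefer the predict() route, since it avoids retyping (and possibly mistyping) the coefficients.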
Yes, the curved line fits the data better, and it may seem