The amount of unstructured data available to software engineering researchers in versioning systems, issue trackers, archived communications, etc., is continuously growing. Mining such data represents an unprecedented opportunity for researchers to investigate new research questions and to build a new generation of recommender systems supporting development and maintenance activities. This talk describes work on the application of Mining Unstructured Data (MUD) in software engineering. The talk briefly reviews the types of unstructured data available to researchers, providing pointers to basic mining techniques for exploiting them. Then, an overview of the existing applications of MUD in software engineering is provided, with a specific focus on textual data present in software repositories and code components. The talk also discusses perils the "miner" should avoid while mining unstructured data and lists possible future trends for the field.
Views: 230 SANER2016 FOSE
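One of the simplest mining techniques the talk points to, keyword matching over commit messages, can be sketched as follows. The messages and the keyword pattern are made-up examples for illustration, not taken from the talk.

```python
import re
from collections import Counter

# Toy corpus of commit messages (hypothetical examples, not from a real project).
commits = [
    "Fix NPE in parser when input is empty",
    "Add unit tests for parser",
    "fix: crash on malformed config file",
    "Refactor logging module",
    "Fixed off-by-one bug in pagination",
]

# A crude bug-fix indicator; real studies use richer heuristics or classifiers.
BUG_FIX = re.compile(r"\b(fix(es|ed)?|bug|crash|defect)\b", re.IGNORECASE)

def classify(messages):
    """Split commit messages into likely bug fixes and other changes."""
    fixes = [m for m in messages if BUG_FIX.search(m)]
    others = [m for m in messages if not BUG_FIX.search(m)]
    return fixes, others

fixes, others = classify(commits)
# Count the vocabulary of the bug-fix messages.
top_terms = Counter(w.lower() for m in fixes for w in re.findall(r"[a-z]+", m, re.I))
```

Even this crude pattern already separates corrective from non-corrective changes in the toy corpus; real MUD work layers stemming, stop-word removal, and statistical models on top of this kind of preprocessing.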
This presentation was recorded at GOTO Berlin 2016 http://gotober.com Elmar Juergens - Consultant at CQSE GmbH ABSTRACT If the team architecture and the technical architecture do not fit together, problems arise. Both architectures evolve, however, often causing misalignment. How can we notice such mismatches and react in time? In this talk, I present modern [...] Download slides and read the full abstract here: https://gotocon.com/berlin-2016/presentations/show_talk.jsp?oid=8030 https://twitter.com/gotober https://www.facebook.com/GOTOConference http://gotocon.com
Views: 1454 GOTO Conferences
What is SOFTWARE REPOSITORY? What does SOFTWARE REPOSITORY mean? SOFTWARE REPOSITORY meaning - SOFTWARE REPOSITORY definition - SOFTWARE REPOSITORY explanation. Source: Wikipedia.org article, adapted under https://creativecommons.org/licenses/by-sa/3.0/ license. A software repository is a storage location from which software packages may be retrieved and installed on a computer. Many software publishers and other organizations maintain servers on the Internet for this purpose, either free of charge or for a subscription fee. Repositories may be solely for particular programs, such as CPAN for the Perl programming language, or for an entire operating system. Operators of such repositories typically provide a package management system: tools intended to search for, install and otherwise manipulate software packages from the repositories. For example, many Linux distributions use the Advanced Packaging Tool (APT), commonly found in Debian-based distributions, or Yum, found in Red Hat-based distributions. There are also multiple independent package management systems, such as pacman, used in Arch Linux, and Equo, found in Sabayon Linux. As software repositories are designed to include useful packages, major repositories are designed to be malware-free. If a computer is configured to use a digitally signed repository from a reputable vendor, and is coupled with an appropriate permissions system, this significantly reduces the threat of malware to these systems. As a side effect, many systems that have these capabilities do not require anti-malware software such as anti-virus software. Most major Linux distributions have many repositories around the world that mirror the main repository. A package management system is different from a package development process. A typical use of a package management system is to facilitate the integration of code from possibly different sources into a coherent stand-alone operating unit. 
Thus, a package management system might be used to produce a distribution of Linux, possibly a distribution tailored to a specific restricted application. A package development process, by contrast, is used to manage the co-development of code and documentation of a collection of functions or routines with a common theme, producing thereby a package of software functions that typically will not be complete and usable by themselves. A good package development process will help users conform to good documentation and coding practices, integrating some level of unit testing. The table below provides examples of package development processes.
Views: 2165 The Audiopedia
This video explains ETL (Extract, Transform, Load), a data-warehousing term for extracting, transforming and finally loading data. View more information about ETL in the Cloud at https://timelinepi.com/knowledgebase/etl/ Want to learn more about TimelinePI? Learn more here https://timelinepi.com/request-more-information-youtube/
Views: 60 TimelinePI
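A minimal sketch of the three ETL stages described above, assuming a toy CSV export as the source and an in-memory SQLite table as the warehouse; all names and values below are hypothetical.

```python
import csv
import io
import sqlite3

# Hypothetical raw export; a real "extract" step would read from a source system.
RAW = """order_id,amount,currency
1,10.50,usd
2,3.00,USD
3,7.25,eur
"""

def extract(text):
    """Extract: read the raw source into rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: normalize currency codes and convert amounts to integer cents."""
    return [
        {"order_id": int(r["order_id"]),
         "cents": round(float(r["amount"]) * 100),
         "currency": r["currency"].upper()}
        for r in rows
    ]

def load(rows, conn):
    """Load: insert the cleaned rows into the warehouse table."""
    conn.execute("CREATE TABLE orders (order_id INTEGER, cents INTEGER, currency TEXT)")
    conn.executemany("INSERT INTO orders VALUES (:order_id, :cents, :currency)", rows)
    return conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]

conn = sqlite3.connect(":memory:")
loaded = load(transform(extract(RAW)), conn)
```

The pipeline shape, extract producing rows, transform cleaning them, load writing to the target, is the essence of ETL regardless of tooling scale.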
This video explains how to set up RapidMiner Studio, one of the most popular data mining software packages. It covers getting the software, organizing the RapidMiner project repository, as well as installing RapidMiner with some of the commonly used extensions. Links to the data sets for the following lessons will be provided with each lesson; all of them can be found in the following location: * http://visanalytics.org/youtube-rsrc/rm-data/ As more lessons are added, more data will be uploaded to this web directory. Videos in data analytics and data visualization by Jacob Cybulski, visanalytics.org.
Views: 912 ironfrown
CROSSMINER enables the monitoring, in-depth analysis and evidence-based selection of open source components, and facilitates knowledge extraction from large open-source software repositories. In this talk, I will present the overall process for meeting this challenge.
Views: 19 OW2
Robotic Process Automation (RPA) requires in-depth knowledge of current process performance. This is where process discovery with TimelinePI comes in: it establishes a performance baseline and identifies suboptimal execution patterns, which become the candidates for automation. Learn more about how process mining and robotic process automation are linked in this video. Want to learn more about TimelinePI? Learn more here https://timelinepi.com/request-more-information-youtube/
Views: 224 TimelinePI
Learn how metrics in TimelinePI can be used to gain a better understanding of your data, as opposed to scrolling through thousands of records. Want to learn more about TimelinePI? Learn more here https://timelinepi.com/request-more-information-youtube/
Views: 18 TimelinePI
More videos like this at http://www.theurbanpenguin.com In this, week 9 of this series, we look at software installation sources, or repositories. These are locations, such as network shares or DVDs, from which we can install software. We take a standard installation disk, copy it to an image file, and use this file as an installation source for openSUSE 11.4.
Views: 15076 theurbanpenguin
Ambika Tripathi, Savita Dabral and Ashish Sureka, University-Industry Collaboration and Open Source Software (OSS) Dataset in Mining Software Repositories (MSR) Research, International Workshop on Software Analytics (SWAN 2015) co-located with International Conference on Software Analysis, Evolution, and Reengineering (SANER 2015)
Views: 227 ashishsurekadelhi
In some environments there are multiple Repository Services (RS), and it is a tedious job to find which pmrepagent process belongs to which RS. This is even more difficult when either the admin console is not running or the user does not have privileges to log in to the admin console. This video demonstrates how to obtain the RS name from the arguments of the pmrepagent process, and how to decode the encrypted arguments of the pmrepagent/pmserver process.
Views: 1385 Informatica Support
This video explores the KNIME Node Repository to show the features and modules available in the KNIME Analytics Platform. We start with IO, moving to Mining and Statistics, through ETL and Data Manipulation, Data Views, Tool & Script Integration, and many more. - Installation of KNIME Analytics Platform on Linux available at https://youtu.be/wibggQYr4ZA - Installation of KNIME Analytics Platform on Windows available at https://youtu.be/yeHblDxakLk - Installation of KNIME Analytics Platform on Mac available at https://youtu.be/1jvRWryJ220 - "What is a node, what is a workflow" https://youtu.be/M4j5jQBTEsM Next: - "The EXAMPLES Server" https://youtu.be/CRa_SbWgmVk - "Workflow Coach: The Wisdom of the KNIME Crowd" https://youtu.be/RusMXn-shsQ
Views: 4188 KNIMETV
Dr. Sheriff Rahuman from the European Bioinformatics Institute (EMBL-EBI) presents on data-rich public repositories including IntAct, Reactome, Complex Portal, OmicsDI, and Expression Atlas, including the types of data available in each database and the functionality of the platforms. Dr. Rahuman explores the utility of BioModels, a public repository of biomodels, explaining the curation and annotation process that the BioModels team uses to ensure reproducibility of submitted models. Dr. Henning Hermjakob joins for discussion of BioModels utility. Seminar participants explore ideas of how best to gather and incorporate data for their research, and comment on the importance of spatial considerations in cell models and the need for improved annotation.
Views: 115 Center for Reproducible Biomedical Modeling
In this video, you will learn how to create and edit an extraction form in SRDR (the Systematic Review Data Repository). This demo will guide you through each step of the form creation process. SRDR is a web-based tool for the extraction and management of data for systematic reviews and meta-analyses. It is also an open and searchable archive of systematic review and meta-analysis data.
Views: 2888 TuftsEPC
Take the full course on Data Warehousing. What we provide: 1) 22 videos (index is given below), with updates coming before final exams; 2) handmade notes with problems for you to practice; 3) a strategy to score good marks in DWM. To buy the course click here: https://goo.gl/to1yMH or fill the form and we will contact you: https://goo.gl/forms/2SO5NAhqFnjOiWvi2 If you have any query email us at [email protected] or [email protected] Index: Introduction to Datawarehouse; Metadata in 5 mins; Datamart in datawarehouse; Architecture of datawarehouse; How to draw star schema, snowflake schema and fact constellation; What is OLAP operation; OLAP vs OLTP; Decision tree with solved example; K-means clustering algorithm; Introduction to data mining and architecture; Naive Bayes classifier; Apriori algorithm; Agglomerative clustering algorithm; KDD in data mining; ETL process; FP-tree algorithm; Decision tree
Views: 101979 Last moment tuitions
Step-by-step guide to GPU mining of Equihash-based altcoins on Linux using Ubuntu 16.04. Need to know how to install Ubuntu? Check out our triple boot guide: https://youtu.be/VrkhWZ8-zvM Link to code repo used in the video: https://github.com/createthis/linux_gpu_mining FAQ: Q. How do I install the nvidia drivers? A. I show how to install the nvidia drivers here: https://youtu.be/VrkhWZ8-zvM?t=10m57s Q. I have more than two GPUs. How do I set up the additional GPUs? A. You need to modify underclock.sh: https://github.com/createthis/linux_gpu_mining/blob/master/underclock.sh Just add more nvidia-settings and nvidia-smi lines for the additional GPUs.
Views: 94988 createthis
Real-world software engineering is an iterative process, and one of its main objectives is to get changes of all types - including new features, configuration changes, bug fixes and experiments - into production and into the hands of users safely, quickly and in a sustainable way. Continuous Delivery (CD), a software engineering discipline, with its principled approach allows you to solve this exact problem. The core idea of CD is to create a repeatable, reliable and incrementally improving process for taking software from concept to the end user. Like software development, building real-world machine learning (ML) algorithms is also an iterative process with a similar objective: how do I get my ML algorithms into production and into the hands of users in a safe, quick and sustainable way? The current process of building models, testing them and deploying them into production is at best an ad-hoc process in most companies. At Indix, while building the Google of Products, we have had some good success in combining the best practices of continuous delivery in building our machine learning pipelines using open source tools and frameworks. The talk will not focus on the theory of ML or on choosing the right ML algorithm, but specifically on the last-mile problem of taking models to production and the lessons learned while applying the concepts of CD to ML. Here are some of the key questions the talk will try to answer. ML models repository as analogous to a software artifacts repository - Similar to a software repository, what are the features of a models repository to aid traceability and reproducibility? Specifically, how do you manage models end to end - managing model metadata, visualization, lineage, etc.? ML pipelines to orchestrate and visualize the end-to-end flow - A typical ML workflow has multiple stages.
How do you model your entire workflow as a pipeline (similar to a build pipeline in CD) to automate the entire process and help visualize the end-to-end flow? Model quality assurance - What quality gates and evaluation metrics, either manual or automated, should be used before exporting (promoting) models for serving in production? What happens when several different models are in play? How do you measure the models individually and also in combination? Serving models in production - How do you serve and scale these models in production? What happens when these models are heterogeneous (built using different languages - Scala, Python, etc.)? Regression testing of models - When exporting a new model, what is the best way to compare the performance of the newer model to the one already deployed on real-world (production) data? Maintenance and monitoring of models in production - Deploying models to production is only half the job done. How do you measure the performance of your model while it is running in production?
Views: 405 FunctionalTV
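One hedged illustration of the model quality assurance question raised above (a sketch, not Indix's actual pipeline): a promotion gate can be expressed as a comparison of evaluation metrics between the deployed model and a candidate. The metric names, thresholds, and the "higher is better" convention below are all assumptions for the example.

```python
# Hypothetical quality gate: promote a candidate model only if it beats the
# deployed model on the primary metric by a minimum margin, and never
# regresses on any other tracked metric beyond a tolerance.

def quality_gate(deployed, candidate, min_gain=0.01, tolerance=0.02):
    """deployed/candidate: dicts of metric name -> score (higher is better).

    The candidate is assumed to report every metric the deployed model does.
    """
    primary = "accuracy"
    # Require a meaningful improvement on the primary metric.
    if candidate[primary] < deployed[primary] + min_gain:
        return False
    # Reject if any tracked metric regresses beyond the tolerance.
    for metric, base in deployed.items():
        if candidate[metric] < base - tolerance:
            return False
    return True

deployed = {"accuracy": 0.91, "recall": 0.88}
candidate = {"accuracy": 0.93, "recall": 0.87}
promote = quality_gate(deployed, candidate)
```

In a CD-style pipeline this check would run as an automated stage after model evaluation, blocking the export step when it returns False.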
A2V is a powerful add-on for Microsoft® Visio® Professional. The Align 2 Visualization tool manages the links between the shapes in the drawings, the business process repositories and the live systems. This framework provides excellent guidance to business analysts in charge of modeling. A2V provides a business process repository that downloads and extends the Solution Manager repository and links with specific BPMN-compliant Visio® shapes that allow precise modeling of the SAP® objects. The visual representation simplifies and enhances communication between the project stakeholders, from business analysts, key users and programmers to end users. Key elements of Align to Visualization: 1. Integration with SAP® Solution Manager and download of SolMan processes. 2. Own database repository of SAP® processes and process steps. 3. Integration and validation with ERP systems for authorizations and applicable transaction codes. 4. Adherence to the BPMN standard.
Views: 338 Opteon
What is BUSINESS RULE MINING? What does BUSINESS RULE MINING mean? BUSINESS RULE MINING meaning - BUSINESS RULE MINING definition - BUSINESS RULE MINING explanation. Source: Wikipedia.org article, adapted under https://creativecommons.org/licenses/by-sa/3.0/ license. Business rule mining is the process of extracting essential intellectual business logic in the form of Business Rules from packaged or Legacy software applications, recasting them in natural or formal language, and storing them in a source rule repository for further analysis or forward engineering. The goal is to capture these legacy business rules in a way that the business can validate, control and change them over time. Business rule mining supports a Business rules approach, which is defined as a formal way of managing and automating an organization's business rules so that the business behaves and evolves as its leaders intend. It is also commonly conducted as part of an application modernization project evolving legacy software applications to service oriented architecture (SOA) solutions, transitioning to packaged software, redeveloping new in-house applications, or to facilitate knowledge retention and communication between business and IT professionals in a maintenance environment. Alternative approaches to rule mining are manual and automated. A manual approach involves the hand-writing of rules on the basis of subject matter expert interviews and the inspection of source code, job flows, data structures and observed behavior. Manually extracting rules is complicated by the difficulty of locating and understanding highly interdependent logic that has been interwoven into millions of lines of software code. An automated approach utilizes repository-based software to locate logical connections inherent within applications and extract them into a predetermined business rules format. 
With automation, an effective approach is to apply semantic structures to existing applications. By overlaying business contexts onto legacy applications, rules miners can focus effort on discovering rules from systems that are valuable to the business. Effort is redirected away from mining commoditized or irrelevant applications. Further, best practices coupled with various tool-assisted techniques of capturing programs’ semantics speeds the transformation of technical rules to true business rules. Adding business semantics to the analysis process allows users to abstract technical concepts and descriptors that are normal in an application to a business level that is consumable by a rules analyst. System integrators, software vendors, rules mining practitioners, and in-house development teams have developed technologies, proprietary methodologies and industry-specific templates for application modernization and business rule mining.
Views: 80 The Audiopedia
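A toy sketch of the automated approach described above, assuming a made-up legacy code fragment. Real rule-mining tools work on full parse trees and data-flow analysis rather than line-level regular expressions; the "WHEN ... THEN ..." output format is likewise just an illustrative choice.

```python
import re

# Hypothetical legacy source; real rule mining operates on millions of lines
# of interdependent code, not three tidy conditionals.
LEGACY = """
if customer_age >= 65: discount = 0.15
if order_total > 1000: shipping = 0
if state == 'CA': tax_rate = 0.0725
"""

# Match a simple one-line conditional: condition before the colon, action after.
RULE = re.compile(r"if (?P<condition>[^:]+):\s*(?P<action>.+)")

def mine_rules(source):
    """Extract candidate business rules into a predetermined rule format."""
    rules = []
    for line in source.splitlines():
        m = RULE.search(line)
        if m:
            rules.append("WHEN %s THEN %s" % (m.group("condition").strip(),
                                              m.group("action").strip()))
    return rules

rules = mine_rules(LEGACY)
```

The point of the sketch is the shape of the transformation: technical conditionals are lifted into a rule repository format that a business analyst can review and validate.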
Minalytix is a disruptive force in developing industry-leading mining and exploration software. The company specializes in cutting-edge software and techniques that bring new ideas to the mining industry. It offers expertise that integrates and maximizes existing software, analytics, and big data. When suitable software does not exist, the company is also capable of building its own data collection tools to solve the problem.
Views: 136 NORCAT
Live demo of the AUTOGRAM grammar miner. Knowledge about how a program processes its inputs can help to understand the structure of the input as well as the structure of the program. In a JSON value like [1,true,"Alice"], for instance, the integer value 1, the boolean value true and the string value "Alice" would be handled by different functions or stored in different variables. Our AUTOGRAM tool uses dynamic tainting to trace the data flow of each input character for a set of sample inputs and identifies syntactic entities by grouping input fragments that are handled by the same functions. The resulting context-free grammar reflects the structure of valid inputs, can be used for reverse engineering of formats, and can serve as direct input for test generators. For more details on AUTOGRAM, see its project page at https://www.st.cs.uni-saarland.de/models/autogram/
Views: 257 Andreas Zeller
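The grouping idea behind AUTOGRAM can be illustrated with a toy sketch. This is not the real tool, which uses dynamic tainting to observe an existing program; here a tiny hand-written parser simply records which function consumed each input fragment, and the fragments are then grouped by handler to suggest grammar rules. All function names below are made up for the illustration.

```python
# Toy re-creation of the AUTOGRAM idea: fragments of the input are grouped
# by the parsing function that handled them, yielding grammar-rule candidates.

def parse_value(s, pos, trace):
    if s[pos] == '[':
        return parse_list(s, pos, trace)
    if s[pos] == '"':
        return parse_string(s, pos, trace)
    if s[pos:pos + 4] == "true":
        trace.append(("parse_bool", "true"))
        return pos + 4
    return parse_int(s, pos, trace)

def parse_int(s, pos, trace):
    start = pos
    while pos < len(s) and s[pos].isdigit():
        pos += 1
    trace.append(("parse_int", s[start:pos]))
    return pos

def parse_string(s, pos, trace):
    end = s.index('"', pos + 1)          # find the closing quote
    trace.append(("parse_string", s[pos:end + 1]))
    return end + 1

def parse_list(s, pos, trace):
    pos += 1                              # consume '['
    while s[pos] != ']':
        pos = parse_value(s, pos, trace)
        if s[pos] == ',':
            pos += 1
    return pos + 1                        # consume ']'

def mine_grammar(sample):
    """Group consumed fragments by handler function, like a grammar rule."""
    trace = []
    parse_value(sample, 0, trace)
    grammar = {}
    for handler, fragment in trace:
        grammar.setdefault(handler, set()).add(fragment)
    return grammar

grammar = mine_grammar('[1,true,"Alice"]')
```

Run over many sample inputs, each handler's fragment set generalizes into an alternative of a context-free rule, which is the output AUTOGRAM derives automatically from the traced program.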
Mobile data gathering with bounded relay in wireless sensor networks - IEEE PROJECTS 2018. Download projects @ www.micansinfotech.com WWW.SOFTWAREPROJECTSCODE.COM https://www.facebook.com/MICANSPROJECTS Call: +91 90036 28940 ; +91 94435 11725 SOFTWARE ENGINEERING, COMPUTER GRAPHICS: 1. Reviving Sequential Program Birthmarking for Multithreaded Software Plagiarism Detection 2. EVA: Visual Analytics to Identify Fraudulent Events 3. Performance Specification and Evaluation with Unified Stochastic Probes and Fluid Analysis 4. Trustrace: Mining Software Repositories to Improve the Accuracy of Requirement Traceability Links 5. Amorphous Slicing of Extended Finite State Machines 6. Test Case-Aware Combinatorial Interaction Testing 7. Using Timed Automata for Modeling Distributed Systems with Clocks: Challenges and Solutions 8. EDZL Schedulability Analysis in Real-Time Multicore Scheduling 9. Ant Colony Optimization for Software Project Scheduling and Staffing with an Event-Based Scheduler 10. Locating Need-to-Externalize Constant Strings for Software Internationalization with Generalized String-Taint Analysis 11. Systematic Elaboration of Scalability Requirements through Goal-Obstacle Analysis 12. Centroidal Voronoi Tessellations - A New Approach to Random Testing 13. Ranking and Clustering Software Cost Estimation Models through a Multiple Comparisons Algorithm 14. Pair Programming and Software Defects - A Large, Industrial Case Study 15. Automated Behavioral Testing of Refactoring Engines 16. An Empirical Evaluation of Mutation Testing for Improving the Test Quality of Safety-Critical Software 17. Self-Management of Adaptable Component-Based Applications 18. Elaborating Requirements Using Model Checking and Inductive Learning 19. Resource Management for Complex, Dynamic Environments 20. Identifying and Summarizing Systematic Code Changes via Rule Inference 21. Generating Domain-Specific Visual Language Tools from Abstract Visual Specifications 22. Toward Comprehensible Software Fault Prediction Models Using Bayesian Network Classifiers 23. On Fault Representativeness of Software Fault Injection 24. A Decentralized Self-Adaptation Mechanism for Service-Based Applications in the Cloud 25. Coverage Estimation in Model Checking with Bitstate Hashing 26. Synthesizing Modal Transition Systems from Triggered Scenarios 27. Using Dependency Structures for Prioritization of Functional Test Suites INFORMATION AND COMMUNICATION SYSTEM: 1. A Data Mining based Model for Detection of Fraudulent Behaviour in Water Consumption SERVICES COMPUTING: 1. SVM-DT-Based Adaptive and Collaborative Intrusion Detection (Jan 2018) 2. Cloud Workflow Scheduling With Deadlines And Time Slot Availability (March-April 2018) 3. Secure and Sustainable Load Balancing of Edge Data Centers in Fog Computing (17 May 2018) 4. Semantic-based Compound Keyword Search over Encrypted Cloud Data 5. Quality and Profit Assured Trusted Cloud Federation Formation: Game Theory Based Approach 6. Optimizing Autonomic Resources for the Management of Large Service-Based Business Processes
Views: 10 MICANS IEEE 2018-19 CSE PROJECTS PPT
This podcast is about version control systems, which are used to manage versions of files. REFERENCES: [1] Rodriguez-Bustos, C. and Aponte, J., "How Distributed Version Control Systems Impact Open Source Software Projects", in Mining Software Repositories (MSR), 2012 9th IEEE Working Conference, 2012, pp. 36-39. [2] de Alwis, B. and Sillito, J., "Why Are Software Projects Moving From Centralized to Decentralized Version Control Systems?", in Cooperative and Human Aspects on Software Engineering, CHASE '09, ICSE Workshop, 2009, pp. 36-39. Books: 1. Michael Pilato, Version Control with Subversion, O'Reilly & Associates, Inc., Sebastopol, CA, USA, 2004. 2. Scott Chacon, Pro Git, Apress, Berkeley, CA, USA, 2009. Web: http://www.ohloh.net/
Views: 260 Dhanraj Jadhav
PyData London 2016. Data from smartphone sensors can be used to learn from and analyse our daily behaviours. In this talk, I'll discuss processing and learning from sensor data with Python. I'll focus on accelerometers - a triaxial sensor that measures motion - starting with an overview of pre-processing the data and ending with supervised and unsupervised learning applications and visualisations. Our smartphones are increasingly being built with sensors that can measure everything from where we are (GPS, Wi-Fi) to how we move (accelerometers) and other aspects of our environments (e.g., temperature, humidity). Many apps are now being designed to collect and leverage this data, in order to provide interesting context-aware services and quantify our daily routines. In this talk, I'll give an overview of collecting sensor data from an Android app and processing the data with Python. I'll focus on accelerometers - a triaxial sensor that measures the device's motion - which is now being used in apps that detect what you are doing (cycling, running, riding a train); if we have enough time I'll also briefly cover a similar example with Wi-Fi/location data. Using an open-sourced Android app and an IPython notebook, I'll discuss the following questions: What does the raw data look like? There are a number of trade-offs when collecting sensor data: most notably, data collection needs to be balanced against battery consumption. Plotting the raw data gives a view of how the data was sampled and how it changes across activities. How can I pre-process and extract features from this data? Three kinds of features can be extracted from accelerometer data: statistical, time-series, and signal-based. Most of these are readily available in well-known Python libraries (scipy, numpy, statsmodels). How can these features be used to analyse behaviours? I'll show an example of using accelerometer data to cluster users into groups, based on how active they are. 
How can these features be used to detect behaviours? I'll show an example of training a supervised learning algorithm (using scikit-learn) to detect walking vs. running vs. standing. I'll close by discussing how these techniques are being applied in novel smartphone apps for health monitoring. GitHub Repo: https://github.com/nlathia/pydata_2016
Views: 3520 PyData
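As a minimal sketch of the statistical feature-extraction step described in the talk abstract (using only the standard library rather than scipy/numpy, and made-up sample values rather than real sensor readings), per-axis statistics plus an overall magnitude can be computed like this:

```python
import math
from statistics import mean, stdev

# Synthetic triaxial accelerometer window (hypothetical values); real data
# would come from a phone's sensor API, sampled at e.g. 50 Hz.
window = [
    (0.1, 0.0, 9.8), (0.3, -0.1, 9.9), (0.2, 0.1, 9.7),
    (0.0, 0.0, 9.8), (0.4, -0.2, 10.0),
]

def features(samples):
    """Simple statistical features per axis, plus mean overall magnitude."""
    axes = list(zip(*samples))  # x, y, z series
    feats = {}
    for name, series in zip("xyz", axes):
        feats[f"{name}_mean"] = mean(series)
        feats[f"{name}_std"] = stdev(series)
    # Magnitude combines the three axes, making the feature orientation-robust.
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    feats["mag_mean"] = mean(mags)
    return feats

f = features(window)
```

Feature vectors like this one, computed over sliding windows, are what would then be fed to a clustering algorithm or a scikit-learn classifier for activity detection.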
This video is an introduction to KNIME. KNIME is an open source platform for data analysis, predictive analytics and modeling. It is not based on a scripting language; rather, it has a graphical interface. This video shows the basic functions of KNIME in terms of the process of reading, manipulating, visualizing and analyzing data.
Views: 108595 KNIMETV
Scheduling Algorithm and GUI created for WEKA: a bi-directional dataset for WEKA and 4D simulation in a virtual environment. Abstract - 4D scheduling and scheduling software databases/repositories in general are being utilized by many industries. Which algorithms to use for specific data mining solutions on scheduling datasets, and how much such solutions cost, are growing concerns. In this paper the scheduling dataset created by the authors is a small example of a construction scheduling database, built utilizing the WEKA open-source software. 4D simulations and scheduling databases are very expensive when outsourcing to a proprietor like Microsoft Project or VICO Control versus utilizing a free, open-source solution like WEKA. The authors' objective is to show how to prepare and process 4D scheduling data for free and obtain results similar to those of expensive proprietary software. Our goal is for the reader to attain new knowledge on how to create their own bi-directional dataset and visualization tool for free by utilizing open-source application development kits.
Views: 128 Dr. Shawn O'Keeffe
Hi! Today I will show how to download a dataset from the UCI repository and prepare the data. Let's go: 1. Go to the UCI dataset web site: https://archive.ics.uci.edu/ml/datasets.html 2. Choose a dataset, e.g. the iris dataset 3. Click Data Folder 4. Click iris.data 5. Copy all the text 6. Paste it into Notepad++ 7. Replace the class labels as follows: Iris-setosa with 1,-1,-1; Iris-versicolor with -1,1,-1; Iris-virginica with -1,-1,1. Thank you ^^
Views: 3551 COMSCI Channel
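The label-replacement step above can also be scripted instead of done by hand in Notepad++. This hedged sketch applies the same replacements shown in the video to a sample line; the sample measurement values are illustrative.

```python
# Replace iris class labels with the -1/1 encoding used in the video.
ENCODING = {
    "Iris-setosa": "1,-1,-1",
    "Iris-versicolor": "-1,1,-1",
    "Iris-virginica": "-1,-1,1",
}

def encode(line):
    """Apply each label -> code replacement to one line of iris.data."""
    for label, code in ENCODING.items():
        line = line.replace(label, code)
    return line

sample = "5.1,3.5,1.4,0.2,Iris-setosa"
encoded = encode(sample)
```

Mapping the whole file is then just `[encode(line) for line in open("iris.data")]` on the downloaded data.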
The Data Conservancy was introduced to Data Rescue Boulder through our long-time partner Ruth Duerr of the Ronin Institute. Through our conversations, we recognized that Data Rescue Boulder needs to process a large number of rescued data sets and store them in more permanent homes. We also recognized that the Data Conservancy, along with the Open Science Framework (OSF), has the software infrastructure to support such activities and bring a selective subset of the rescued data into our own institutional repository. We chose the subset of data based on a selection from one of the Johns Hopkins University faculty members. This video shows how a Fedora-based institutional repository can be leveraged to provide preservation services for content stored within an OSF project. JHU has developed tools for exporting the business objects from the OSF as archival packages. When ingested into our institutional repository and represented as RDF linked data, repository services can reason about the content within OSF projects in support of preservation, data mining, and other value-add activities.
Views: 114 Data Conservancy
Dec. 17, 2013: The Incremental Commitment Spiral Model (ICSM): Principles and Practices for Successful Systems and Software with ACM Fellow Dr. Barry Boehm, TRW Professor in the USC Computer Sciences and Industrial and Systems Engineering Departments. Moderated by Dr. Boehm's former doctoral student LiGuo Huang, Associate Professor of Computer Science and Engineering at Southern Methodist University. The Incremental Commitment Spiral Model (ICSM) extends the scope of the original spiral model for software development to cover the definition, development, and evolution of cyber-physical-human systems. It has been successfully applied to systems ranging from small e-services applications to complex cyber-physical-human systems of systems. It is not a one-size-fits-all process model, but uses four essential principles to determine whether, where, and when to use candidate common-case process elements (reuse-based, prototype-based, agile, architected agile, plan-driven, product-line, systems of systems, legacy-based, etc.). The four essential principles are (1) Stakeholder value-based system evolution; (2) Incremental commitment and accountability; (3) Concurrent multi-discipline engineering; and (4) Evidence and risk-based decisions. This presentation covers the four essential principles and their rationale; spiral, phased, concurrency, and process-element-decision process views; associated tools such as an Electronic Process Guide and the Winbook stakeholder win-win requirements negotiation system; and examples of successful ICSM use and pitfalls to avoid. (Based on a book co-authored by Barry Boehm, Jo Ann Lane, Supannika Koolmanojwong, and Richard Turner.) Duration: 60 minutes Presenter: Barry Boehm, University of Southern California Dr. Barry Boehm is the TRW Professor in the USC Computer Sciences and Industrial and Systems Engineering Departments. 
He is also the Chief Scientist of the DoD-Stevens-USC Systems Engineering Research Center, and the founding Director of the USC Center for Systems and Software Engineering. He was director of DARPA-ISTO 1989-92, at TRW 1973-89, at Rand Corporation 1959-73, and at General Dynamics 1955-59. His contributions include the COCOMO family of cost models and the Spiral family of process models. He is a Fellow of the primary professional societies in computing (ACM), aerospace (AIAA), electronics (IEEE), and systems engineering (INCOSE), and a member of the U.S. National Academy of Engineering. Moderator: LiGuo Huang, Southern Methodist University. Dr. LiGuo Huang is an associate professor in the Computer Science and Engineering Department (CSE) at Southern Methodist University (SMU). She received both her Ph.D. (2006) and M.S. from the Computer Science Department and Center for Systems and Software Engineering (CSSE) at the University of Southern California (USC). After her Ph.D., she joined SMU CSE as an Assistant Professor in 2007. Her current research centers around mining systems and software engineering repositories, software process modeling, simulation and improvement, software quality and information dependability assurance, value-based software engineering, and empirical software engineering. Her research is supported by NSF, the U.S. Department of Defense, NSA, and industry. She has been intensively involved in initiating the research on stakeholder/value-based integration of systems and software engineering and has published in ICSE, ASE, IEEE Computer and IEEE Software. She has been a reviewer for TSE, TR, JSS, JSEP, IST, IJSI and a program committee member for a number of international software engineering conferences and workshops. She served as the Program Committee Chair of ICSSP 2012 and CSEE&T 2012, and the Asian Chair of CSEE&T 2011. She is a member of the CSEE&T Steering Committee and the Program Committee Chair of ICSSP 2014.
Views: 2670 Association for Computing Machinery (ACM)
FSU Libraries' Academic Publishing Team works with members of the FSU community to curate, archive, and provide access to a diverse range of materials related to the missions of scholarship, research, and education at FSU. The repository showcases the work of individuals, departments, institutes, colleges, and other communities of researchers on campus.
Views: 577 FSU Libraries
A quick tutorial on how to share the data files you create by hosting them as a data repository, so BattleScribe users can download them and create army lists. Please note: when hosting your repository on your own website, you need to upload the entire repository folder and its contents together! I don't recommend manually changing the contents of the folder in any way (except re-creating it with the Data Indexer). Dropbox: http://www.dropbox.com Enable the Public folder: http://www.dropbox.com/help/16/en
Views: 7485 BattleScribe
OSCAR produces semantic annotation of chemistry documents. It uses natural-language processing to identify terms related to chemistry, which allows fast and efficient extraction of chemistry information. This video provides an overview of OSCAR. It shows the software being used to search PubMed abstracts, process them and then browse the results. http://www.omii.ac.uk/wiki/OSCAR
Views: 248 omiiuk
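The core idea OSCAR automates can be illustrated with a deliberately simplified sketch: match known chemistry terms against text and record where they occur. This is not OSCAR's actual API; the term list and function name below are hypothetical, and OSCAR itself uses natural-language processing and far richer chemical knowledge than a fixed dictionary.

```python
import re

# A toy dictionary of chemistry terms; purely illustrative, not OSCAR's model.
CHEMISTRY_TERMS = {"ethanol", "benzene", "sodium chloride", "acetic acid"}

def annotate(text):
    """Return (term, start, end) spans for dictionary terms found in text."""
    spans = []
    for term in CHEMISTRY_TERMS:
        for m in re.finditer(re.escape(term), text, re.IGNORECASE):
            spans.append((term, m.start(), m.end()))
    # Sort by position so annotations read in document order.
    return sorted(spans, key=lambda s: s[1])

abstract = "Samples were washed with ethanol and dissolved in acetic acid."
print(annotate(abstract))
```

Real systems like OSCAR go well beyond exact matching, handling inflections, formulas, and context, but the output shape (annotated spans over source text) is the same.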
Lightning talk at FOSDEM 2018. The talk explains how to analyze software development with GrimoireLab. It shows with simple code how easy it is to retrieve data from git, GitHub, and many other kinds of repositories. Then, with the same toolkit, the data is organized in Elasticsearch indexes and visualized in actionable dashboards. Many free / open source software (FOSS) projects feature an open development model, with public software development repositories that anyone can browse. These repositories are normally used to find specific information, such as a certain commit or a particular bug report. But they can also be mined to extract all relevant data, so that it can be analyzed to learn about any aspect of the project. This talk explains the GrimoireLab method for doing that, which is based on organizing all that information in a database that can later be analyzed. This approach allows for minimal impact on the project infrastructure, since data is retrieved only once, even if it is later analyzed many times. It also makes mining data for an analysis efficient and comfortable, since the results are readily available, databases can be shared and replicated at will, and querying them with any kind of tool is easy. The tools that retrieve information from the repositories are grouped in the GrimoireLab toolset. It includes mature, widely tested programs capable of extracting information from most repositories used by FOSS projects of any scale. Many of them are agnostic with respect to the database used, although currently Elasticsearch is the best supported. The produced databases can be exploited in several ways, of which two are explained during the talk: using Python/Pandas to produce IPython/Jupyter notebooks that analyze some aspect of the project; and using Python to feed an Elasticsearch cluster, with a Kibana front end for visualizing in a flexible, powerful dashboard.
The talk will explain the whole process from data retrieval to visualization. Some of the contents of the talk are described in detail in the online book GrimoireLab Tutorial.
Views: 64 GrimoireLab
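The "retrieve once, analyze many times" idea behind GrimoireLab can be illustrated with a minimal, dependency-free sketch: turn raw repository output into structured records, then run analyses over those records. This is not the GrimoireLab/Perceval API; the git-log format, field names, and functions below are assumptions made for illustration only.

```python
import json

# Sample output of `git log --pretty=format:"%H|%an|%ad" --date=short`,
# captured here as a string so the sketch needs no live repository.
GIT_LOG = """\
a1b2c3|Alice|2018-01-15
d4e5f6|Bob|2018-01-14
789abc|Alice|2018-01-10"""

def parse_log(raw):
    """Turn raw git-log lines into dict records, mimicking the
    'retrieve once, store as documents' step of the pipeline."""
    commits = []
    for line in raw.splitlines():
        sha, author, date = line.split("|")
        commits.append({"sha": sha, "author": author, "date": date})
    return commits

commits = parse_log(GIT_LOG)

# Once stored (e.g. as JSON documents in an Elasticsearch index), the same
# data can back many analyses, such as commits per author:
per_author = {}
for c in commits:
    per_author[c["author"]] = per_author.get(c["author"], 0) + 1
print(json.dumps(per_author))
```

GrimoireLab's own retrieval tools do this at scale for many repository kinds (git, issue trackers, mailing lists), which is what makes the shared-database approach practical.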
Part 1 in an in-depth, hands-on tutorial introducing the viewer to data science with R programming. The video provides end-to-end data science training, including data exploration, data wrangling, data analysis, data visualization, feature engineering, and machine learning. All source code from the videos is available on GitHub. NOTE - The data for the competition has changed since this video series was started. You can find the applicable .CSV files in the GitHub repo. Blog: http://daveondata.com GitHub: https://github.com/EasyD/IntroToDataScience I do Data Science training as a Bootcamp: https://goo.gl/OhIHSc
Views: 908157 David Langer
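Although the series teaches R, the kind of feature-engineering step it demonstrates can be sketched language-neutrally in a few lines of Python. The column names and rows below are hypothetical stand-ins, not the actual competition schema; the two steps shown (deriving a new feature from a text column, imputing missing values with the mean) are typical of the workflow the videos walk through.

```python
import csv
import io

# A tiny in-memory CSV standing in for the competition data
# (hypothetical columns; note the missing age in row two).
RAW = """name,age,fare
"Smith, Mr. John",34,7.25
"Doe, Mrs. Jane",,71.28
"Brown, Miss. Amy",19,8.05"""

def engineer(rows):
    """Extract a title feature from the name and impute missing ages
    with the mean of the observed ages."""
    ages = [float(r["age"]) for r in rows if r["age"]]
    mean_age = sum(ages) / len(ages)
    for r in rows:
        # "Smith, Mr. John" -> "Mr"
        r["title"] = r["name"].split(",")[1].split(".")[0].strip()
        r["age"] = float(r["age"]) if r["age"] else mean_age
    return rows

rows = engineer(list(csv.DictReader(io.StringIO(RAW))))
print([r["title"] for r in rows])
```

The same transformation in R would typically use `dplyr` or base-R string functions, but the logic carries over directly.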