Data-driven Generation of Policies (SpringerBriefs in Computer Science)

Amy Sliva

Language: English

Pages: 50

ISBN: 1493902733

Format: PDF / Kindle (mobi) / ePub

This Springer Brief presents a basic algorithm that provides a correct solution to finding an optimal state change attempt, as well as an enhanced algorithm built on top of the well-known trie data structure. It explores correctness and algorithmic complexity results for both algorithms, along with experiments comparing their performance on both real-world and synthetic data. Topics addressed include optimal state change attempts, state change effectiveness, different kinds of effect estimators, planning under uncertainty, and experimental evaluation. These topics will help researchers analyze tabular data, even when the data contains states (of the world) and events (actions taken by an agent) whose effects are not well understood. Event DBs are omnipresent in the social sciences and may include diverse scenarios, from political events and the state of a country to education-related actions and their effects on a school system. With a wide range of applications in computer science and the social sciences, the information in this Springer Brief is valuable for professionals and researchers dealing with tabular data, artificial intelligence, and data mining. The applications are also useful for advanced-level students of computer science.

PostgreSQL: Up and Running

Functional Programming in Scala

Computability, Complexity, and Languages: Fundamentals of Theoretical Computer Science (2nd Edition) (Computer Science and Scientific Computing)

Introduction to the Theory of Computation (3rd Edition)


Extend this algorithm to solve all the problems posed in the last section. This algorithm works by first enumerating each possible state change attempt with size at most h, then choosing the one which solves the appropriate problem. As discussed in the proof of Theorem 2.1, since there are only O(|A|^h) such state change attempts, this algorithm runs in PTIME with respect to the number of action attributes |A|. The algorithm for enumerating state change attempts of size at most h along with their…
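The enumeration step described above can be sketched in a few lines. This is a simplified illustration, not the book's algorithm: it enumerates only the subsets of action attributes of size at most h (a real state change attempt would also assign a new value to each chosen attribute), and the attribute names are hypothetical.

```python
from itertools import combinations

def enumerate_scas(action_attrs, h):
    """Enumerate every nonempty subset of action attributes of size at most h.

    Simplified stand-in for state change attempts (SCAs): the count of
    subsets grows polynomially in |A| for fixed h, matching the O(|A|^h)
    bound quoted in the text.
    """
    scas = []
    for k in range(1, h + 1):
        scas.extend(combinations(action_attrs, k))
    return scas

# Hypothetical action attributes; with |A| = 4 and h = 2 there are
# C(4,1) + C(4,2) = 4 + 6 = 10 candidate subsets.
attrs = ["police_funding", "lighting_funding", "teacher_salary", "class_size"]
print(len(enumerate_scas(attrs, 2)))  # 10
```

For fixed h this loop runs in time polynomial in the number of action attributes, which is the source of the PTIME claim above.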

…size 20 and teacher salary $60,000. Example 3.4. We can also look at how a data ratio effect estimator would operate on the city government database. Suppose we only have the columns Funding for Police Department, Funding for Street Lighting, and Petty Crimes, where the first two are action attributes and the last is a state attribute. In this case, we may want to determine what fraction of the time the incidence of petty crimes is above 125 occurrences when funding for the police department is…
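A data ratio estimator of the kind sketched in this example can be illustrated as follows. This is a hedged reconstruction from the excerpt alone: the column names, the sample rows, and the exact conditioning are illustrative assumptions, not the book's definition.

```python
def data_ratio_estimate(rows, action_pred, outcome_pred):
    """Data-ratio style estimate: among the tuples satisfying the action
    condition, the fraction that also satisfy the outcome condition.
    Returns 0.0 when no tuple satisfies the action condition."""
    matching = [r for r in rows if action_pred(r)]
    if not matching:
        return 0.0
    return sum(1 for r in matching if outcome_pred(r)) / len(matching)

# Hypothetical event DB rows (funding in $M, petty crime counts).
rows = [
    {"police_funding": 2.0, "petty_crimes": 150},
    {"police_funding": 2.0, "petty_crimes": 110},
    {"police_funding": 1.0, "petty_crimes": 180},
]

# Fraction of high-police-funding tuples where petty crimes exceed 125:
print(data_ratio_estimate(rows,
                          lambda r: r["police_funding"] >= 2.0,
                          lambda r: r["petty_crimes"] > 125))  # 0.5
```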

Fig. 5.9 Performance of the naive algorithm DSEE_OSCA (Algorithm 2) versus TOSCA over synthetic data as the action attribute domain size increases (2 through 20), with the number of tuples fixed at 8,000. [Figure: computation time in seconds, roughly 0 to 35, plotted against action attribute domain size.]

…generation process.

2.1 Effect Estimators

The goal in this work is to allow an end user to take an event KB K and a goal G (some desired outcome condition on state attributes) that the user wants to achieve, and to find an SCA that "optimally" achieves goal G in accordance with some objective function (such as maximizing the probability of goal G being achieved while minimizing cost). We assume without loss of generality that all goals are expressed as standard conjunctive selection conditions [3].
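A conjunctive selection condition of the kind assumed above can be checked with a simple all-of test. This is a minimal sketch under an assumed representation (a goal as a list of attribute/predicate pairs); the attribute names are hypothetical and not from the book.

```python
def satisfies_goal(state, goal):
    """True iff the state satisfies a conjunctive selection condition:
    every (attribute, predicate) pair in the goal must hold."""
    return all(pred(state[attr]) for attr, pred in goal)

# Hypothetical goal G: petty crimes at most 125 AND school rating at least 8.
goal = [
    ("petty_crimes", lambda v: v <= 125),
    ("school_rating", lambda v: v >= 8),
]

print(satisfies_goal({"petty_crimes": 100, "school_rating": 9}, goal))  # True
print(satisfies_goal({"petty_crimes": 200, "school_rating": 9}, goal))  # False
```

Because goals are conjunctions, an objective function can score a candidate SCA by the estimated probability that the resulting state passes this check, which is the setup the text describes.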

…by the Comisión de Investigaciones Científicas de la Provincia de Buenos Aires (Argentina) to fund his studies towards a Master's degree in Computer Science under Simon Parsons (Brooklyn College, City University of New York, USA), as a member of the Artificial Intelligence Research and Development Laboratory (LIDIA) at Universidad Nacional del Sur. In 2005, he started his Ph.D. studies at the University of Maryland, College Park (USA) under V. S. Subrahmanian. During his time in Maryland, he was a…