Doctoral Theses
Browsing Doctoral Theses by Subject "1203.04 Artificial Intelligence"
Showing 1 - 9 of 9
Item Una arquitectura distribuida para la detección, comunicación y mitigación de la denegación de servicio (Universidad Rey Juan Carlos, 2014) Campo Giralte, Luis

As is well known, cyberattacks today take place anywhere in the world and affect all kinds of infrastructure. Every month brings news of cyberattacks against large companies, governments, universities and Internet businesses, among others, and none of the available commercial products seems able to solve the major problem of denial of service. For this reason, and in order to offer adequate protection against such intrusions, it is necessary to provide solutions that are configurable and easy to integrate while remaining fast and effective. In this vein, the thesis presented here develops a distributed architecture for the detection, communication and mitigation of denial of service, based on hybrid software applications built around a core written in C, combined with the flexibility of Python through external interfaces in that language. The thesis explores solutions to application-level denial of service. To that end, it studies and investigates techniques based on neural networks, statistical analysis at the IP level, correlation of TCP flags, the study of user behaviour over server resources, flow-control solutions and distributed solutions, among others. The techniques studied result in a distributed, scalable and low-cost solution that depends on the type of infrastructure to be protected. The proposal considers a distributed architecture resting on three fundamental pillars: detection, communication and mitigation of denial of service. These elements cooperate so that, on both real and simulated data, this type of cyberattack is detected, mitigated and communicated. For detection, we propose an algorithm that achieves high detection rates on the HTTP protocol. In addition, covert channels allow the different elements of the proposed architecture to communicate in a practically invisible way. Finally, the intrusions are mitigated in such a way that regular users of the service experience no disruption in their communications. All the source code described here is published on the Internet under the GPL license and may be downloaded and used by other researchers.
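As a rough illustration of application-level detection of this kind, a minimal sliding-window request-rate check in Python might look as follows. This is a generic sketch with invented parameters (`WINDOW`, `MAX_REQUESTS`), not the detection algorithm proposed in the thesis:

```python
from collections import defaultdict, deque
import time

# Invented thresholds, for illustration only.
WINDOW = 10.0       # seconds of history kept per client
MAX_REQUESTS = 100  # requests tolerated per window before flagging

history = defaultdict(deque)  # client IP -> timestamps of recent requests

def observe(ip, now=None):
    """Record one HTTP request; return True if the client looks abusive."""
    now = time.monotonic() if now is None else now
    q = history[ip]
    q.append(now)
    while q and now - q[0] > WINDOW:  # discard requests outside the window
        q.popleft()
    return len(q) > MAX_REQUESTS
```

A real detector would combine several such signals (the abstract mentions TCP-flag correlation and per-resource user behaviour) rather than a single rate threshold.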
Item Coordination Mechanisms for Agent-Based Smart Grids (Universidad Rey Juan Carlos, 2013) Mihailescu, Radu-Casian

Governments around the globe are heavily investing in upgrading the ageing infrastructure of the electricity grid. The imperative for this is driven primarily by regulatory requirements and the high cost of inefficiently delivering energy. The infrastructure is continuously improving, as more and more smart meters are installed, coupled with the proliferation of controllable loads and distributed generation. However, network operators and utilities, as well as end-consumers and small-scale producers, are struggling to extract value from such systems and are exploring new ways of optimizing the performance of their deployed assets. This thesis introduces a multiagent approach for modelling the emerging complexity of the energy industry. The multiagent system paradigm is an ideal candidate for delivering a framework that captures the inherently distributed and dynamic nature of smart grids. While traditionally centralized management of the system becomes less viable in the context of distributed generation and controllable loads, the underlying thread of this thesis advocates the design and implementation of coordination mechanisms capable of integrating and managing such devices at large scale via agent-based control. We begin by proposing dynamic micro-grids, a new conceptual organization of the network, adequate for integrating today's traditional users into an interactive, Internet-like system, in the sense that power flow becomes bidirectional and energy management becomes distributed across the grid due to the many actors involved in the operation of the system. The mechanisms proposed for micro-grid formation are oriented towards producing sub-systems of the grid that exhibit reduced transmission losses and efficient utilization of renewables, as well as endowing the system with self-adaptation techniques for coping with dynamic environments. We further aim to enhance the operation of the micro-grid formations by focusing on two main aspects. On the one hand (supply side), we are concerned with seamlessly integrating distributed generation to ensure a reliable energy supply comparable to what a large power plant delivers today. We first address the economic benefits of virtual power plants in a game-like setting and then propose a DCOP-based formalism for solving the schedule generation problem while accounting for the stochastic behaviour of intermittent supply. On the other hand (consumer side), we apply game mechanics to drive the behaviour of prosumers towards grid-wise efficient use of energy. In order to cope with the challenges faced by current electricity networks, we propose a game layer on top of the electricity grid infrastructure and the use of coordination mechanisms as a catalyst for change, encouraging the participation of prosumers in the energy field towards reduced costs, lower-carbon generation and increased grid resilience in the form of demand-response and demand-side management solutions. Finally, we propose a collusion detection mechanism that complements the above-mentioned solutions by inspecting for patterns where agents tacitly cooperate through illicit monopoly tactics to manipulate energy markets.
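As a toy illustration of the micro-grid formation idea (not one of the mechanisms proposed in the thesis), consumer agents can be greedily attached to the nearest generator, with distance standing in crudely for transmission loss. All identifiers and coordinates below are invented:

```python
import math

# Hypothetical agents: generator and consumer positions on a plane.
generators = {"g1": (0.0, 0.0), "g2": (10.0, 10.0)}   # id -> (x, y)
consumers = {"c1": (1.0, 2.0), "c2": (9.0, 8.0), "c3": (4.0, 5.0)}

def nearest_generator(pos):
    """Pick the generator minimizing distance (a proxy for line losses)."""
    return min(generators, key=lambda g: math.dist(pos, generators[g]))

# Form micro-grid coalitions around each generator.
micro_grids = {g: [] for g in generators}
for cid, pos in consumers.items():
    micro_grids[nearest_generator(pos)].append(cid)

print(micro_grids)  # {'g1': ['c1', 'c3'], 'g2': ['c2']}
```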
Item Gaming with Emotions: An Architecture for the Development of Mood-Driven Characters in Video Games (Universidad Rey Juan Carlos, 2013) Peña Sánchez, Luis

In this dissertation, we study the emotional component of the behavior of artificial characters in video games. Our primary aim is to improve the video game playing experience by increasing the sense of realism in gaming scenarios. For this purpose, we develop an emotion simulation model called EEP that accounts for the impact of external events on a character's mood state, and analyze its relevance for the development of mood-driven behaviors as part of the control strategies of artificial characters. In addition, we provide a mechanism that improves the development procedure for video game characters, by developing a new hybrid machine learning model called WEREWoLF that purposefully combines reinforcement learning and evolutionary techniques so as to automatically generate character control strategies associated with different mood states. Both models are integrated into the AGCBAR architecture, which constitutes the solution proposed in this dissertation to the problem of efficiently designing mood-driven strategies for artificial characters in video games. The AGCBAR architecture is capable of encompassing a broad variety of game engine cores, and is thus applicable to a wide spectrum of video games. We assess the adequacy of the above architecture and its components in different ways. While the EEP model has been evaluated on the basis of the judgment of expert gamers, the WEREWoLF algorithm has undergone a quantitative evaluation in a video game scenario. Finally, we implement the complete architecture together with an experimental video game framework in a complex case study, comparing the development effort of mood-driven artificial characters using the AGCBAR architecture together with EEP and WEREWoLF with that of traditional implementation techniques.
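A generic sketch of how reinforcement learning and evolutionary techniques can be combined, in the spirit of (but not reproducing) the WEREWoLF model: tabular Q-learning updates train each individual, while an evolutionary step keeps and mutates the fitter Q-tables. The actions, parameters and fitness interface are all hypothetical:

```python
import random

ACTIONS = ["attack", "flee", "idle"]  # hypothetical character actions

def q_update(q, s, a, r, s2, alpha=0.1, gamma=0.9):
    """Standard tabular Q-learning update on a dict-based Q-table."""
    best_next = max(q.get((s2, b), 0.0) for b in ACTIONS)
    old = q.get((s, a), 0.0)
    q[(s, a)] = old + alpha * (r + gamma * best_next - old)

def evolve(population, fitness, mutation_rate=0.05):
    """Keep the fitter half of the Q-tables and add mutated copies."""
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[: len(ranked) // 2]
    children = []
    for q in survivors:
        child = dict(q)
        for key in child:
            if random.random() < mutation_rate:
                child[key] += random.gauss(0.0, 0.1)  # perturb Q-values
        children.append(child)
    return survivors + children
```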
Item Heuristic strategies for different variants of the Order Batching Problem (Universidad Rey Juan Carlos, 2017) Menéndez Moreno, Borja

Optimization is a discipline that can be seen as a cornerstone of other areas, such as Artificial Intelligence, Computer Science, or Operations Research. Optimization aims to find feasible, high-quality solutions to real-life problems, and has applications in engineering, medicine, economics, logistics and many other fields. Since there exist many optimization problems of practical interest, efficient techniques to address them are necessary. A possible classification of current approaches distinguishes between exact and approximate methods. Exact methods are able to obtain an optimal solution to a given problem, but they usually require a large execution time; thus, they become impracticable when the size of the problem is sufficiently large, as commonly occurs in real-life problems. Within the family of approximate techniques, heuristic algorithms are able to find high-quality solutions in a reasonable amount of time; however, they cannot guarantee the optimality of the solution found, nor how far the solution provided is from the optimal one. The Order Batching Problem (OBP) is an NP-hard optimization problem whose objective is to minimize the total retrieving time of a set of orders received in a warehouse. To achieve that, the main strategy is to group orders into batches, so that orders from the same batch are retrieved in the same trip. There also exist different variants of the original problem in which different objective functions are considered: for instance, the minimization of the maximum retrieving time of each batch, the minimization of tardiness when orders have a certain due date, or the minimization of the total retrieving time when orders are received online. In this Doctoral Thesis we propose several heuristic algorithms to address problems related to the OBP. All the proposed algorithms make use of the Variable Neighborhood Search (VNS) methodology in some of its most usual variants (Basic VNS or General VNS), in a parallel implementation, or embedded in a multi-start strategy. These algorithms have been tested on different variants of the OBP over several reference sets of instances. The results obtained improve the current state of the art in all the OBP variants tackled.
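The Basic VNS scheme the abstract refers to follows a well-known skeleton: shake the incumbent in the k-th neighborhood, improve it by local search, and restart from the first neighborhood on improvement. A generic sketch, not the thesis's algorithms; problem-specific `shake`, `local_search` and `cost` functions (e.g. over an order-batching encoding) must be supplied, and `k_max` and `iters` are invented parameters:

```python
def basic_vns(initial, shake, local_search, cost, k_max=3, iters=100):
    """Schematic Basic VNS loop over k_max neighborhood structures."""
    best = initial
    for _ in range(iters):
        k = 1
        while k <= k_max:
            candidate = local_search(shake(best, k))  # perturb, then improve
            if cost(candidate) < cost(best):
                best, k = candidate, 1  # improvement: restart neighborhoods
            else:
                k += 1                  # otherwise widen the shake
    return best
```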
Item Interoperabilidad entre sistemas de apoyo a la conducción de operaciones militares (Universidad Rey Juan Carlos, 2011-10) Serna Cañas, Miguel

Item Probabilidad, Redes Neuronales e Inteligencia Artificial en Composición Musical. Desarrollo de los Sistemas MusicProb y MusicNeural (Universidad Rey Juan Carlos, 2012) Padilla Martín-Caro, Víctor

Item El problema de la minimización de la anchura de corte en ordenaciones lineales: Resolución exacta y heurística (Universidad Rey Juan Carlos, 2011) García Pardo, Eduardo

Optimization is a discipline that seeks solutions to problems of everyday life. A large number of problems of interest in scientific and technological areas can be stated as optimization problems. An optimization problem is one in which, there being many possible solutions and some clear way of comparing them, the goal is to find the best of them all, that is, the optimal solution. Combinatorial Optimization problems are a class of optimization problems whose solutions are made up of integers. There are numerous Combinatorial Optimization problems for which no algorithms are known that solve them in polynomial time. Nevertheless, given the practical interest of many of them, efficient techniques to tackle them are needed. Existing techniques can be classified into exact and approximate. Exact techniques are able to find the optimal solution, but they require high computation times, which makes them unviable when the problem size is large. Among the approximate techniques, heuristic algorithms stand out: they are capable of finding high-quality solutions in reasonable computation time, even though they can certify neither whether the solution found is optimal nor how far it is from the optimum. The cutwidth minimization problem in linear orderings is an NP-hard Combinatorial Optimization problem that consists of finding a labelling of the vertices of a graph that minimizes the maximum number of edges crossing the gap between any two consecutive vertices when the vertices of the graph are laid out on a straight line. This problem has applications in circuit design, migration and reliability of telecommunication networks, automatic graph drawing, and information retrieval. This Doctoral Thesis proposes exact and heuristic algorithms for solving the cutwidth minimization problem in linear orderings. The proposed exact algorithms are based on the Branch and Bound technique, for which several lower bounds and different traversals of the search tree are proposed. As for the heuristic algorithms, constructive, improvement and solution-combination heuristics are proposed and embedded within a hybrid GRASP with Scatter Search scheme. Both the exact and the heuristic algorithms improve on the results obtained by the existing state-of-the-art methods over the sets of instances evaluated.
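To make the objective concrete, here is a small sketch that evaluates the cutwidth of a given linear ordering, following the definition in the abstract (evaluation only; the thesis's Branch and Bound and GRASP with Scatter Search methods are not reproduced, and the example graph is invented):

```python
def cutwidth(order, edges):
    """Max number of edges crossing any gap between consecutive vertices."""
    pos = {v: i for i, v in enumerate(order)}
    width = 0
    for gap in range(len(order) - 1):  # gap between positions gap and gap+1
        crossing = sum(1 for u, v in edges
                       if min(pos[u], pos[v]) <= gap < max(pos[u], pos[v]))
        width = max(width, crossing)
    return width

edges = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]
print(cutwidth(["a", "b", "c", "d"], edges))  # 2
```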
Item Selección de factores económico-financieros determinantes del éxito de las empresas en los mercados internacionales mediante técnicas de Inteligencia Artificial (Universidad Rey Juan Carlos, 2011) Miranda García, Marta

The global "new economic order" is characterized by the presence and interrelation of processes as significant as the revolution in information technologies, the growing and seemingly unstoppable economic globalization, and the restructuring of capitalism. All of these are at once cause and effect of the many ongoing changes that have given rise to a world economy characterized by the ease of capital movements, and hence by the internationalization and "deterritorialization" (EMBONG, 2000) of capital, a new international division of labour, a growing interdependence between countries and companies, and a growth in international trade never seen before. Given these characteristics, it is companies that must compete in a changing and uncertain environment, regardless of the nationality of their capital, of their consumers, or of the country of origin of their competitors...

Item Variational and Deep Learning Methods in Computer Vision (Universidad Rey Juan Carlos, 2019) Ramírez Díaz, Iván

Computer Vision is a field that aims to simulate the human visual system. In the last decade, with the continuous emergence of multimedia data and applications, there has been increasing interest in exploiting all this available information, which mainly consists of images and videos. Classic approaches to Computer Vision problems constitute a bag of tricks that have been useful for many years. With the irruption of Deep Learning, most of these techniques suddenly became old. The reason is the impressive results of Deep Learning techniques, which, taking advantage of the available data, provide an end-to-end solution that is nowadays easy to use, even for non-expert users. Surprisingly, some Variational Methods, which can be considered classical methods in Computer Vision, survived and kept leading the state of the art in some specific tasks, medical image registration for instance. The impact of Deep Learning applications on society is undeniable. Moreover, the benefit of automating many processes and tedious tasks that are still performed by humans should be taken as good news, since it would provide more free time for people... and thus, more time to live. The dark side of such automation lies in how these new techniques, framed within the field of Artificial Intelligence, are democratized across society; that is, how useful in practice these emerging tools are and who has access to them. Autonomous driving, medical imaging, and earthquake and pollution prediction are a few examples of critical application fields where inaccuracy implies disastrous consequences. In such scenarios, classic approaches in Computer Vision provide less uncertainty in their outcomes. In this sense, classical methods are more robust, in particular Variational Methods, which have deep and strong mathematical foundations. Moreover, adversarial attacks on Neural Networks have recently shown how easy it is to fool Deep Learning systems, increasing skepticism among potential Deep Learning users such as medical experts. In this thesis we address Computer Vision problems in real scenarios from two perspectives, using: (1) Variational Methods and (2) Deep Learning techniques. The former is a powerful tool that gives extraordinary control over the expected outcomes, with very accurate results provided the hyper-parameterization is carried out properly. However, this required (usually manual) hyper-parameterization constitutes a huge shortcoming in practice, and a limitation for wide use by non-experts. The latter relies mainly on data and solves, up to a point, a high-dimensional interpolation problem with astonishing results that, however, are sometimes unpredictable (and thus dangerous) when unseen data from a different distribution is tested (extrapolation).
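As a minimal illustration of what a variational method looks like in practice (a generic sketch, not one of the thesis's models): denoising a 1-D signal by gradient descent on the Tikhonov-regularized energy E(u) = 0.5*||u - f||^2 + 0.5*lam*||grad u||^2. The signal, `lam` and step size are invented:

```python
import numpy as np

def tikhonov_denoise(f, lam=5.0, step=0.05, iters=500):
    """Gradient descent on a quadratic smoothing energy (toy example)."""
    u = f.copy()
    for _ in range(iters):
        grad_fidelity = u - f                                # d/du of fidelity
        laplacian = np.roll(u, -1) - 2 * u + np.roll(u, 1)   # periodic boundary
        u -= step * (grad_fidelity - lam * laplacian)        # descend the energy
    return u

rng = np.random.default_rng(0)
f = np.sin(np.linspace(0, 2 * np.pi, 100)) + 0.3 * rng.standard_normal(100)
u = tikhonov_denoise(f)  # smoothed reconstruction of the noisy sine
```

Note how the hyper-parameter `lam` directly trades fidelity against smoothness; this is the fine control, and the manual tuning burden, that the abstract attributes to variational methods.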