Today I finished reading “Groo and Rufferto” by Sergio Aragones
Read – Biologically Inspired Artificial Intelligence for Computer Games
Today I finished reading “Biologically Inspired Artificial Intelligence for Computer Games” by Darryl Charles
Listening – Lovetune For Vacuum
This week I am listening to “Lovetune For Vacuum” by Soap&Skin
Read – jQuery Cookbook
Today I finished reading “jQuery Cookbook: Solutions & Examples for jQuery Developers” by Cody Lindley
Studying – Drawing on the Right Side of the Brain
This month I am studying “Drawing on the Right Side of the Brain”
I’ve tried sitting down and working through the book and the video twice and just never finished it.
There’s a month-long class at the local community centre I am going to take, and this time I will finish it.
Paper – The Case for Modeling Security, Privacy, Usability and Reliability (SPUR) in Automotive Software
Today I read a paper titled “The Case for Modeling Security, Privacy, Usability and Reliability (SPUR) in Automotive Software”
The abstract is:
Over the past five years, there has been considerable growth and established value in the practice of modeling automotive software requirements.
Much of this growth has been centered on requirements of software associated with the established functional areas of an automobile, such as those associated with powertrain, chassis, body, safety and infotainment.
This paper makes a case for modeling four additional attributes that are increasingly important as vehicles become information conduits: security, privacy, usability, and reliability.
These four attributes are important in creating specifications for embedded in-vehicle automotive software.
Listening – Swoon
This week I am listening to “Swoon” by Silversun Pickups
Paper – First Principle Approach to Modeling of Small Scale Helicopter
Today I read a paper titled “First Principle Approach to Modeling of Small Scale Helicopter”
The abstract is:
The establishment of a global helicopter linear model is very valuable and useful for the design of linear control laws, since such a model is rarely afforded in the published literature.
In the first principle approach, the mathematical model was developed using basic helicopter theory accounting for particular characteristic of the miniature helicopter.
No formal system identification procedures are required for the proposed model structure.
The relevant published literature, however, did not present the linear models required for the design of linear control laws.
The paper presents a step by step development of linear model for small scale helicopter based on first-principle approach.
Beyond the previous work in the literature, the calculation of the stability derivatives is presented in detail.
A computer program is used to solve the equilibrium conditions and then calculate the change in aerodynamics forces and moments due to the change in each degree of freedom and control input.
The detailed derivation allows a comprehensive analysis of the relative dominance of vehicle states and input variables over force and moment components.
Hence it facilitates the development of minimum complexity small scale helicopter dynamics model.
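The perturb-each-degree-of-freedom computation the abstract describes can be sketched with a central difference around the trim point. This is a generic numerical sketch, not the paper's helicopter model; the toy force function below is a made-up placeholder.

```python
def stability_derivative(force_fn, trim_state, index, h=1e-6):
    """Estimate d(force)/d(state[index]) around a trim point by a
    central difference, mirroring the perturb-one-degree-of-freedom
    procedure described in the abstract."""
    up, down = list(trim_state), list(trim_state)
    up[index] += h
    down[index] -= h
    return (force_fn(up) - force_fn(down)) / (2 * h)

# Hypothetical toy force model (not the paper's): vertical force Z
# varying linearly with heave velocity w = state[0].
z_force = lambda s: -9.81 + 4.5 * s[0]
Zw = stability_derivative(z_force, [0.0, 0.0], 0)  # recovers the 4.5 slope
```

Repeating this over every state and control input fills out one row of the linearized force/moment model at a time.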
Paper – Recognizing Members of the Tournament Equilibrium Set is NP-hard
Today I read a paper titled “Recognizing Members of the Tournament Equilibrium Set is NP-hard”
The abstract is:
A recurring theme in the mathematical social sciences is how to select the “most desirable” elements given a binary dominance relation on a set of alternatives.
Schwartz’s tournament equilibrium set (TEQ) ranks among the most intriguing, but also among the most enigmatic, tournament solutions that have been proposed so far in this context.
Due to its unwieldy recursive definition, little is known about TEQ.
In particular, its monotonicity remains an open problem up to date.
Yet, if TEQ were to satisfy monotonicity, it would be a very attractive tournament solution concept refining both the Banks set and Dutta’s minimal covering set.
We show that the problem of deciding whether a given alternative is contained in TEQ is NP-hard.
Read – The Adventure of the Second Stain
Today I finished reading “The Adventure of the Second Stain” by Arthur Conan Doyle
Paper – Linear Time Recognition Algorithms for Topological Invariants in 3D
Today I read a paper titled “Linear Time Recognition Algorithms for Topological Invariants in 3D”
The abstract is:
In this paper, we design linear time algorithms to recognize and determine topological invariants such as the genus and homology groups in 3D.
These properties can be used to identify patterns in 3D image recognition.
This has a tremendous number of applications in 3D medical image analysis.
Our method is based on cubical images with direct adjacency, also called (6,26)-connectivity images in discrete geometry.
Based on the fact that there are only six types of local surface points in 3D and a discrete version of the well-known Gauss-Bonnet Theorem in differential geometry, we first determine the genus of a closed 2D-connected component (a closed digital surface).
Then, we use Alexander duality to obtain the homology groups of a 3D object in 3D space.
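As an illustrative aside (this is not the paper's linear-time algorithm, which counts the six local surface point types directly): for any closed orientable surface mesh the genus follows from the Euler characteristic chi = V - E + F via g = 1 - chi/2, which is what the discrete Gauss-Bonnet argument ultimately delivers.

```python
def genus(v, e, f):
    """Genus of a closed orientable surface mesh from its Euler
    characteristic chi = V - E + F, via g = 1 - chi/2."""
    chi = v - e + f
    return 1 - chi // 2

# A cube's surface is a topological sphere: genus 0.
assert genus(8, 12, 6) == 0
# A 4x4 grid of squares glued into a torus has chi = 0: genus 1.
assert genus(16, 32, 16) == 1
```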
Listening – Day & Age
This week I am listening to “Day & Age” by The Killers
Paper – Topological robotics: motion planning in projective spaces
Today I read a paper titled “Topological robotics: motion planning in projective spaces”
The abstract is:
We study an elementary problem of topological robotics: rotation of a line, which is fixed by a revolving joint at a base point: one wants to bring the line from its initial position to a final position by a continuous motion in the space.
The final goal is to construct an algorithm which will perform this task once the initial and final positions are given.
Any such motion planning algorithm will have instabilities, which are caused by topological reasons.
A general approach to study instabilities of robot motion was suggested recently by the first named author.
With any path-connected topological space X one associates a number TC(X), called the topological complexity of X.
This number is of fundamental importance for the motion planning problem: TC(X) determines the character of the instabilities that all motion planning algorithms in X must have.
In the present paper we study the topological complexity of real projective spaces.
In particular we compute TC(RP^n) for all n<24.
Our main result is that (for n distinct from 1, 3, 7) the problem of calculating TC(RP^n) is equivalent to finding the smallest k such that RP^n can be immersed into the Euclidean space R^{k-1}.
Paper – A new Contrast Based Image Fusion using Wavelet Packets
Today I read a paper titled “A new Contrast Based Image Fusion using Wavelet Packets”
The abstract is:
Image fusion is a technique that combines complementary information from different images of the same scene so that the fused image is more suitable for segmentation, feature extraction, object recognition, and the human visual system.
In this paper, a simple yet efficient algorithm is presented based on contrast using wavelet packet decomposition.
First, all the source images are decomposed into low and high frequency sub-bands, and then fusion of the high frequency sub-bands is done by means of Directive Contrast.
Now, inverse wavelet packet transform is performed to reconstruct the fused image.
The performance of the algorithm is evaluated by comparing the proposed algorithm against an existing one.
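As a rough sketch of the fusion idea: a single-level 1D Haar transform standing in for the paper's 2D wavelet packet decomposition, and a max-magnitude rule standing in for Directive Contrast. Everything here is a simplification for illustration.

```python
def haar(signal):
    """Single-level Haar transform of an even-length signal:
    returns (approximation, detail) coefficient lists."""
    pairs = list(zip(signal[::2], signal[1::2]))
    approx = [(a + b) / 2 for a, b in pairs]
    detail = [(a - b) / 2 for a, b in pairs]
    return approx, detail

def ihaar(approx, detail):
    """Inverse of haar(): reconstruct the original signal."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

def fuse(x, y):
    """Fuse two registered signals: average the low-frequency band,
    keep whichever high-frequency coefficient has larger magnitude
    (a crude stand-in for the Directive Contrast rule)."""
    ax, dx = haar(x)
    ay, dy = haar(y)
    approx = [(p + q) / 2 for p, q in zip(ax, ay)]
    detail = [p if abs(p) >= abs(q) else q for p, q in zip(dx, dy)]
    return ihaar(approx, detail)
```

The paper decomposes further (wavelet packets split the high-frequency bands again) and works in 2D, but the fuse-then-invert structure is the same.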
Listening – Evil Urges
This week I am listening to “Evil Urges” by My Morning Jacket
Read – Ringworld’s Children
Today I finished reading “Ringworld’s Children” by Larry Niven
Paper – Improvements of the 3D images captured with Time-of-Flight cameras
Today I read a paper titled “Improvements of the 3D images captured with Time-of-Flight cameras”
The abstract is:
Images from 3D Time-of-Flight cameras are affected by errors due to diffuse (indirect) light and to flare light.
The presented method improves the 3D image by reducing the distance errors to dark-surfaced objects.
This is achieved by placing one or two contrast tags in the scene at different distances from the ToF camera.
The white and black parts of the tags are situated at the same distance from the camera, but the distances measured by the camera are different.
This difference is used to compute a correction vector.
The distance to black surfaces is corrected by subtracting this vector from the captured vector image.
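Once the tag has been measured, the correction step reduces to a subtraction. A minimal scalar sketch of that idea follows; the paper works with a correction vector over the whole captured image, not a single scalar.

```python
def correction(d_white, d_black):
    """Correction offset from a contrast tag whose white and black
    halves sit at the same true distance from the camera: the black
    half reads farther than it is."""
    return d_black - d_white

def correct_dark(measured, d_white, d_black):
    """Subtract the tag-derived offset from a distance measured to a
    dark surface (a simplified scalar version of the paper's
    correction-vector subtraction)."""
    return measured - correction(d_white, d_black)

# Tag at 2.00 m: white half reads 2.00 m, black half reads 2.15 m,
# so dark surfaces are over-ranged by about 0.15 m.
corrected = correct_dark(3.45, 2.00, 2.15)  # about 3.30 m
```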
Read – The One Minute Entrepreneur
Today I finished reading “The One Minute Entrepreneur: The Secret to Creating and Sustaining a Successful Business” by Kenneth Blanchard
Listening – Narrow Stairs
This week I am listening to “Narrow Stairs” by Death Cab For Cutie
Studying – Photo restoration with Photoshop
This month I am studying “Photo restoration with Photoshop”
Listening – You & Me
This week I am listening to “You & Me” by The Walkmen
Opine-ated
It is so very easy these days to have an opinion on something.
I note that it is still quite difficult, though, to form one (an opinion) for yourself.
Paper – Scalable Algorithms for Aggregating Disparate Forecasts of Probability
Today I read a paper titled “Scalable Algorithms for Aggregating Disparate Forecasts of Probability”
The abstract is:
In this paper, computational aspects of the panel aggregation problem are addressed.
Motivated primarily by applications in risk assessment, an algorithm is developed for aggregating large corpora of internally incoherent probability assessments.
The algorithm is characterized by a provable performance guarantee, and is demonstrated to be orders of magnitude faster than existing tools when tested on several real-world data-sets.
In addition, unexpected connections between research in risk assessment and wireless sensor networks are exposed, as several key ideas are illustrated to be useful in both fields.
Read – Japanese Demystified
Today I finished reading “Japanese Demystified” by Eriko Sato
Listening – Somewhere At The Bottom Of The River Between Vega And Altair
This week I am listening to “Somewhere At The Bottom Of The River Between Vega And Altair” by La Dispute
Paper – On Self-Regulated Swarms, Societal Memory, Speed and Dynamics
Today I read a paper titled “On Self-Regulated Swarms, Societal Memory, Speed and Dynamics”
The abstract is:
We propose a Self-Regulated Swarm (SRS) algorithm that hybridizes the advantageous characteristics of Swarm Intelligence, such as the emergence of a societal environmental memory or cognitive map via collective pheromone laying in the landscape (properly balancing the exploration/exploitation nature of our dynamic search strategy), with a simple evolutionary mechanism that, through a direct reproduction procedure linked to local environmental features, is able to self-regulate the above exploratory swarm population, speeding it up globally.
In order to test its adaptive response and robustness, we have resorted to different dynamic multimodal complex functions as well as to Dynamic Optimization Control problems, measuring reaction speeds and performance.
Final comparisons were made with standard Genetic Algorithms (GAs), Bacterial Foraging strategies (BFOA), as well as with recent Co-Evolutionary approaches.
SRS’s were able to demonstrate quick adaptive responses, while outperforming the results obtained by the other approaches.
Additionally, some successful behaviors were found.
One of the most interesting illustrates that the present SRS collective swarm of bio-inspired ant-like agents is able to track about 65% of moving peaks traveling at up to ten times the velocity of a single individual composing that precise swarm tracking system.
Paper – Metric State Space Reinforcement Learning for a Vision-Capable Mobile Robot
Today I read a paper titled “Metric State Space Reinforcement Learning for a Vision-Capable Mobile Robot”
The abstract is:
We address the problem of autonomously learning controllers for vision-capable mobile robots.
We extend McCallum’s (1995) Nearest-Sequence Memory algorithm to allow for general metrics over state-action trajectories.
We demonstrate the feasibility of our approach by successfully running our algorithm on a real mobile robot.
The algorithm is novel and unique in that it (a) explores the environment and learns directly on a mobile robot without using a hand-made computer model as an intermediate step, (b) does not require manual discretization of the sensor input space, (c) works in piecewise continuous perceptual spaces, and (d) copes with partial observability.
Together this allows learning from much less experience compared to previous methods.
Read – The Discworld Graphic Novels: The Colour of Magic & The Light Fantastic
Today I finished reading “The Discworld Graphic Novels: The Colour of Magic & The Light Fantastic” by Terry Pratchett
Paper – Artificial Immune Systems (AIS) – A New Paradigm for Heuristic Decision Making
Today I read a paper titled “Artificial Immune Systems (AIS) – A New Paradigm for Heuristic Decision Making”
The abstract is:
Over the last few years, more and more heuristic decision making techniques have been inspired by nature, e.g. evolutionary algorithms, ant colony optimisation and simulated annealing.
More recently, a novel computational intelligence technique inspired by immunology has emerged, called Artificial Immune Systems (AIS).
This immune system inspired technique has already been useful in solving some computational problems.
In this keynote, we will very briefly describe the immune system metaphors that are relevant to AIS.
We will then give some illustrative real-world problems suitable for AIS use and show a step-by-step algorithm walkthrough.
A comparison of AIS to other well-known algorithms and areas for future work will round this keynote off.
It should be noted that as AIS is still a young and evolving field, there is not yet a fixed algorithm template and hence actual implementations might differ somewhat from the examples given here.
Paper – A Novel Approach for Compression of Images Captured using Bayer Color Filter Arrays
Today I read a paper titled “A Novel Approach for Compression of Images Captured using Bayer Color Filter Arrays”
The abstract is:
We propose a new approach for image compression in digital cameras, where the goal is to achieve better quality at a given rate by using the characteristics of a Bayer color filter array.
Most digital cameras produce color images by using a single CCD plate, so that each pixel in an image has only one color component and therefore an interpolation method is needed to produce a full color image.
After the image processing stage, in order to reduce the memory requirements of the camera, a lossless or lossy compression stage often follows.
But in this scheme, before decreasing redundancy through compression, redundancy is increased in an interpolation stage.
In order to avoid increasing the redundancy before compression, we propose algorithms for image compression in which the order of the compression and interpolation stages is reversed.
We introduce image transform algorithms, since non-interpolated images cannot be directly compressed with general image coders.
The simulation results show that our algorithm outperforms conventional methods with various color interpolation methods in a wide range of compression ratios.
Our proposed algorithm provides not only better quality but also lower encoding complexity because the amount of luminance data used is only half of that in conventional methods.
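The reordering the paper proposes starts from the raw mosaic rather than the interpolated image. Splitting a Bayer mosaic into its four sub-sampled colour planes, ready for direct compression, is a few slices; the RGGB layout below is an assumption for the example, not taken from the paper.

```python
def split_bayer(mosaic):
    """Split an RGGB Bayer mosaic (rows x cols, one sample per pixel)
    into its four sub-sampled colour planes.  Compressing these planes
    directly -- before any interpolation -- is the stage reordering
    the paper argues for."""
    r  = [row[0::2] for row in mosaic[0::2]]  # red: even rows, even cols
    g1 = [row[1::2] for row in mosaic[0::2]]  # green: even rows, odd cols
    g2 = [row[0::2] for row in mosaic[1::2]]  # green: odd rows, even cols
    b  = [row[1::2] for row in mosaic[1::2]]  # blue: odd rows, odd cols
    return r, g1, g2, b

mosaic = [
    [10, 20, 11, 21],
    [30, 40, 31, 41],
    [12, 22, 13, 23],
    [32, 42, 33, 43],
]
r, g1, g2, b = split_bayer(mosaic)
```

Each plane is a quarter of the mosaic, and the two green planes together are the half-size luminance-like data the abstract credits for the lower encoding complexity.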
Paper – Social Learning Methods in Board Games
Today I read a paper titled “Social Learning Methods in Board Games”
The abstract is:
This paper discusses the effects of social learning in training of game playing agents.
The training of agents in a social context instead of a self-play environment is investigated.
Agents that use the reinforcement learning algorithms are trained in social settings.
This mimics the way in which players of board games such as Scrabble and chess mentor each other in their clubs.
A Round Robin tournament and a modified Swiss tournament setting are used for the training.
The agents trained using social settings are compared to self play agents and results indicate that more robust agents emerge from the social training setting.
Games with higher state spaces can benefit from such settings, as a diverse set of agents will have multiple strategies that increase the chances of obtaining more experienced players at the end of training.
The Social Learning trained agents exhibit better playing experience than self play agents.
The modified Swiss playing style spawns a larger number of better playing agents as the population size increases.
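A Round Robin schedule like the one used for training can be generated with the classic circle method. This is a generic sketch of that pairing scheme, not the paper's code.

```python
def round_robin(players):
    """All-play-all pairings by the circle method: fix the first
    entry, rotate the rest one step per round.  With an odd player
    count a None entry is appended, and pairing with None is a bye."""
    ps = list(players)
    if len(ps) % 2:
        ps.append(None)
    n = len(ps)
    rounds = []
    for _ in range(n - 1):
        rounds.append([(ps[i], ps[n - 1 - i]) for i in range(n // 2)])
        ps = [ps[0]] + [ps[-1]] + ps[1:-1]  # rotate all but the first
    return rounds

rounds = round_robin(["a", "b", "c", "d"])  # 3 rounds of 2 games each
```

Every pair of agents meets exactly once, which is what lets each agent accumulate experience against the whole population rather than against itself.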
Paper – Robust Audio Watermarking Against the D/A and A/D conversions
Today I read a paper titled “Robust Audio Watermarking Against the D/A and A/D conversions”
The abstract is:
Audio watermarking has played an important role in multimedia security.
In many applications using audio watermarking, D/A and A/D conversions (denoted by DA/AD in this paper) are often involved.
In previous works, however, the robustness issue of audio watermarking against the DA/AD conversions has not drawn sufficient attention yet.
In our extensive investigation, it has been found that the degradation of a watermarked audio signal caused by the DA/AD conversions manifests itself mainly in terms of wave magnitude distortion and linear temporal scaling, causing watermark extraction to fail.
Accordingly, a DWT-based audio watermarking algorithm robust against the DA/AD conversions is proposed in this paper.
To resist the magnitude distortion, the relative energy relationships among different groups of the DWT coefficients in the low-frequency sub-band are utilized in watermark embedding by adaptively controlling the embedding strength.
Furthermore, the resynchronization is designed to cope with the linear temporal scaling.
The time-frequency localization characteristics of DWT are exploited to save the computational load in the resynchronization.
Consequently, the proposed audio watermarking algorithm is robust against the DA/AD conversions, other common audio processing manipulations, and the attacks in StirMark Benchmark for Audio, which has been verified by experiments.
Read – Screw It, Let’s Do It
Today I finished reading “Screw It, Let’s Do It: Lessons In Life” by Richard Branson
Paper – The Role of Artificial Intelligence Technologies in Crisis Response
Today I read a paper titled “The Role of Artificial Intelligence Technologies in Crisis Response”
The abstract is:
Crisis response poses many of the most difficult information technology challenges in crisis management.
It requires information and communication-intensive efforts, utilized for reducing uncertainty, calculating and comparing costs and benefits, and managing resources in a fashion beyond those regularly available to handle routine problems.
In this paper, we explore the benefits of artificial intelligence technologies in crisis response.
This paper discusses the role of artificial intelligence technologies; namely, robotics, ontology and semantic web, and multi-agent systems in crisis response.
Listening – We Sing, We Dance, We Steal Things
This week I am listening to “We Sing, We Dance, We Steal Things” by Jason Mraz
Paper – Learning to Bluff
Today I read a paper titled “Learning to Bluff”
The abstract is:
The act of bluffing confounds game designers to this day.
The very nature of bluffing is even open for debate, adding further complication to the process of creating intelligent virtual players that can bluff, and hence play, realistically.
Through the use of intelligent, learning agents and carefully designed agent outlooks, an agent can in fact learn to predict its opponents’ reactions based not only on its own cards, but on the actions of those around it.
With this wider scope of understanding, an agent can learn to bluff its opponents, with the action representing not an illogical action, as bluffing is often viewed, but rather an act of maximising returns through an effective statistical optimisation.
By using a TD(λ) learning algorithm to continuously adapt neural network agent intelligence, agents have been shown to be able to learn to bluff without outside prompting, and even to learn to call each other’s bluffs in free, competitive play.
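The core of the learning rule mentioned is the TD(λ) update with eligibility traces. A minimal tabular sketch follows; the paper applies the same rule to neural network agents, whereas this illustration uses a plain value table.

```python
def td_lambda(episodes, n_states, alpha=0.1, gamma=0.9, lam=0.8):
    """Tabular TD(lambda) state-value learning with accumulating
    eligibility traces.  episodes is a list of trajectories, each a
    list of (state, reward, next_state_or_None) steps, where None
    marks a terminal transition."""
    v = [0.0] * n_states
    for episode in episodes:
        e = [0.0] * n_states  # eligibility traces, reset per episode
        for s, r, s2 in episode:
            target = r + (gamma * v[s2] if s2 is not None else 0.0)
            delta = target - v[s]        # TD error
            e[s] += 1.0                  # mark the visited state
            for i in range(n_states):
                v[i] += alpha * delta * e[i]
                e[i] *= gamma * lam      # decay all traces
    return v

# One-step chain: state 0 -> terminal with reward 1.
# v[0] converges toward 1 over repeated episodes.
v = td_lambda([[(0, 1.0, None)]] * 50, n_states=1)
```

Replacing the table with a neural network (the traces then decay per-weight gradients) gives the adaptive agents the abstract describes.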
Paper – Torque Ripple Minimization in a Switched Reluctance Drive by Neuro-Fuzzy Compensation
Today I read a paper titled “Torque Ripple Minimization in a Switched Reluctance Drive by Neuro-Fuzzy Compensation”
The abstract is:
Simple power electronic drive circuit and fault tolerance of converter are specific advantages of SRM drives, but excessive torque ripple has limited its use to special applications.
It is well known that controlling the current shape adequately can minimize the torque ripple.
This paper presents a new method for shaping the motor currents to minimize the torque ripple, using a neuro-fuzzy compensator.
In the proposed method, a compensating signal is added to the output of a PI controller, in a current-regulated speed control loop.
Numerical results are presented in this paper, with an analysis of the effects of changing the form of the membership function of the neuro-fuzzy compensator.
Listening – Random Album Title
This week I am listening to “Random Album Title” by deadmau5
Paper – Automatic Face Recognition System Based on Local Fourier-Bessel Features
Today I read a paper titled “Automatic Face Recognition System Based on Local Fourier-Bessel Features”
The abstract is:
We present an automatic face verification system inspired by known properties of biological systems.
In the proposed algorithm the whole image is converted from the spatial to polar frequency domain by a Fourier-Bessel Transform (FBT).
The use of the whole image is compared to the case where only face image regions (local analysis) are considered.
The resulting representations are embedded in a dissimilarity space, where each image is represented by its distance to all the other images, and a Pseudo-Fisher discriminator is built.
Verification test results on the FERET database showed that the local-based algorithm outperforms the global-FBT version.
The local-FBT algorithm performed on par with state-of-the-art methods under different testing conditions, indicating that the proposed system is highly robust to expression, age, and illumination variations.
We also evaluated the performance of the proposed system under strong occlusion conditions and found that it is highly robust to up to 50% face occlusion.
Finally, we automated completely the verification system by implementing face and eye detection algorithms.
Under this condition, the local approach was only slightly superior to the global approach.
Read – The Art of Artificial Evolution: A Handbook on Evolutionary Art and Music
Today I finished reading “The Art of Artificial Evolution: A Handbook on Evolutionary Art and Music” by Juan Romero
Paper – Haptic sensing for MEMS with application for cantilever and Casimir effect
Today I read a paper titled “Haptic sensing for MEMS with application for cantilever and Casimir effect”
The abstract is:
This paper presents an implementation of the Cosserat theory into haptic sensing technologies for real-time simulation of microstructures.
Cosserat theory is chosen instead of the classical theory of elasticity for a better representation of stress, especially in the nonlinear regime.
The use of Cosserat theory leads to a reduction of the complexity of the modelling and thus increases its capability for real time simulation which is indispensable for haptic technologies.
The incorporation of Cosserat theory into haptic sensing technology enables the designer to simulate in real-time the components in a virtual reality environment (VRE) which can enable virtual manufacturing and prototyping.
The software tool created as a result of this methodology demonstrates the feasibility of the proposed model.
As test demonstrators, a cantilever microbeam and microbridge undergoing bending in VRE are presented.
Paper – On the Performance of Joint Fingerprint Embedding and Decryption Scheme
Today I read a paper titled “On the Performance of Joint Fingerprint Embedding and Decryption Scheme”
The abstract is:
Until now, little work has been done to analyze the performance of joint fingerprint embedding and decryption schemes.
In this paper, the security of the joint fingerprint embedding and decryption scheme proposed by Kundur et al.
is analyzed and improved.
The analyses include the security against unauthorized customers, the security against authorized customers, the relationship between security and robustness, the relationship between security and imperceptibility, and the perceptual security.
Based on these analyses, some means are proposed to strengthen the system, such as multi-key encryption and DC coefficient encryption.
The method can be used to analyze other JFD schemes and is expected to provide valuable information for designing them.
Read – The On-Time, On-Target Manager
Today I finished reading “The On-Time, On-Target Manager: How a “Last-Minute Manager” Conquered Procrastination” by Kenneth Blanchard
Listening – Hold On Now, Youngster…
This week I am listening to “Hold On Now, Youngster…” by Los Campesinos!
Greatest weakness is money
“So what’s your greatest weakness?” asked the HR person.
“I actually expect to get paid market rate for my consulting services and have my invoices paid on time,” I replied.
“Well, if all you care about is the money and not the opportunities for growth, this probably isn’t the contracting role for you,” retorted the interviewer.
“You’re right.” I nodded, got up, and left.
Paper – Feature Markov Decision Processes
Today I read a paper titled “Feature Markov Decision Processes”
The abstract is:
General purpose intelligent learning agents cycle through (complex, non-MDP) sequences of observations, actions, and rewards.
On the other hand, reinforcement learning is well-developed for small finite state Markov Decision Processes (MDPs).
So far it is an art performed by human designers to extract the right state representation out of the bare observations, i.e. to reduce the agent setup to the MDP framework.
Before we can think of mechanizing this search for suitable MDPs, we need a formal objective criterion.
The main contribution of this article is to develop such a criterion.
I also integrate the various parts into one learning algorithm.
Extensions to more realistic dynamic Bayesian networks are developed in a companion article.
Paper – A Recommender System based on Idiotypic Artificial Immune Networks
Today I read a paper titled “A Recommender System based on Idiotypic Artificial Immune Networks”
The abstract is:
The immune system is a complex biological system with a highly distributed, adaptive and self-organising nature.
This paper presents an Artificial Immune System (AIS) that exploits some of these characteristics and is applied to the task of film recommendation by Collaborative Filtering (CF).
Natural evolution and in particular the immune system have not been designed for classical optimisation.
However, for this problem, we are not interested in finding a single optimum.
Rather we intend to identify a sub-set of good matches on which recommendations can be based.
It is our hypothesis that an AIS built on two central aspects of the biological immune system will be an ideal candidate to achieve this: Antigen-antibody interaction for matching and idiotypic antibody-antibody interaction for diversity.
Computational results are presented in support of this conjecture and compared to those found by other CF techniques.
Paper – Scheduling Algorithms for Procrastinators
Today I read a paper titled “Scheduling Algorithms for Procrastinators”
The abstract is:
This paper presents scheduling algorithms for procrastinators, where the speed that a procrastinator executes a job increases as the due date approaches.
We give optimal off-line scheduling policies for linearly increasing speed functions.
We then explain the computational/numerical issues involved in implementing this policy.
We next explore the online setting, showing that there exist adversaries that force any online scheduling policy to miss due dates.
This impossibility result motivates the problem of minimizing the maximum interval stretch of any job; the interval stretch of a job is the job’s flow time divided by the job’s due date minus release time.
We show that several common scheduling strategies, including the “hit-the-highest-nail” strategy beloved by procrastinators, have arbitrarily large maximum interval stretch.
Then we give the “thrashing” scheduling policy and show that it is a \Theta(1) approximation algorithm for the maximum interval stretch.
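The interval stretch defined in the abstract is easy to compute for a completed schedule; a small sketch of the objective being minimized:

```python
def max_interval_stretch(jobs):
    """jobs: (release, due, completion) time triples.  A job's
    interval stretch is its flow time (completion - release) divided
    by its window length (due - release); the paper's objective is to
    keep the maximum of this ratio over all jobs small."""
    return max((c - r) / (d - r) for r, d, c in jobs)

# A job finished exactly at its due date has stretch 1; finishing at
# time 15 a job released at 0 and due at 10 has stretch 1.5.
worst = max_interval_stretch([(0, 10, 10), (0, 10, 15)])
```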
Studying – Advanced Photoshop techniques
This month I am studying “Advanced Photoshop techniques”
Listening – We Started Nothing
This week I am listening to “We Started Nothing” by The Ting Tings