This week I am listening to “Shallow Grave” by The Tallest Man On Earth
Read – Outliers
Today I finished reading “Outliers: The Story of Success” by Malcolm Gladwell
Read – Complex Variables Demystified
Today I finished reading “Complex Variables Demystified” by David McMahon
Read – Too Big to Fail
Today I finished reading “Too Big to Fail: The Inside Story of How Wall Street and Washington Fought to Save the Financial System from Crisis and Themselves” by Andrew Ross Sorkin
Paper – Predicting the Path of an Open System
Today I read a paper titled “Predicting the Path of an Open System”
The abstract is:
The expected path of an open system, which is a big Poincaré system, has been found in this paper.
This path has been obtained from the actual and from the expected droop of the open system.
The actual droop has been reconstructed from the variations in the power and in the frequency of the open system.
The expected droop has been found as a function of rotation from the expected potential energy of the open system under synchronization of that system.
Studying – Creating icons with Photoshop
This month I am studying “Creating icons with Photoshop”
Listening – Insurgentes
This week I am listening to “Insurgentes” by Steven Wilson
Paper – Efficient Open World Reasoning for Planning
Today I read a paper titled “Efficient Open World Reasoning for Planning”
The abstract is:
We consider the problem of reasoning and planning with incomplete knowledge and deterministic actions.
We introduce a knowledge representation scheme called PSIPLAN that can effectively represent incompleteness of an agent’s knowledge while allowing for sound, complete and tractable entailment in domains where the set of all objects is either unknown or infinite.
We present a procedure for state update resulting from taking an action in PSIPLAN that is correct, complete and has only polynomial complexity.
State update is performed without considering the set of all possible worlds corresponding to the knowledge state.
As a result, planning with PSIPLAN is done without direct manipulation of possible worlds.
PSIPLAN representation underlies the PSIPOP planning algorithm that handles quantified goals with or without exceptions that no other domain independent planner has been shown to achieve.
PSIPLAN has been implemented in Common Lisp and used in an application on planning in a collaborative interface.
Read – iWoz: Computer Geek to Cult Icon
Today I finished reading “iWoz: Computer Geek to Cult Icon: How I Invented the Personal Computer, Co-Founded Apple, and Had Fun Doing It” by Steve Wozniak
Read – Startup Guide to Guerrilla Marketing
Today I finished reading “Startup Guide to Guerrilla Marketing: A Simple Battle Plan For Boosting Profits” by Jay Conrad Levinson
Paper – Faster and better: a machine learning approach to corner detection
Today I read a paper titled “Faster and better: a machine learning approach to corner detection”
The abstract is:
The repeatability and efficiency of a corner detector determines how likely it is to be useful in a real-world application.
The repeatability is important because the same scene viewed from different positions should yield features which correspond to the same real-world 3D locations [Schmid et al 2000].
The efficiency is important because this determines whether the detector combined with further processing can operate at frame rate.
Three advances are described in this paper.
First, we present a new heuristic for feature detection, and using machine learning we derive a feature detector from this which can fully process live PAL video using less than 5% of the available processing time.
By comparison, most other detectors cannot even operate at frame rate (Harris detector 115%, SIFT 195%).
Second, we generalize the detector, allowing it to be optimized for repeatability, with little loss of efficiency.
Third, we carry out a rigorous comparison of corner detectors based on the above repeatability criterion applied to 3D scenes.
We show that despite being principally constructed for speed, on these stringent tests, our heuristic detector significantly outperforms existing feature detectors.
Finally, the comparison demonstrates that using machine learning produces significant improvements in repeatability, yielding a detector that is both very fast and very high quality.
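The segment test at the heart of this family of detectors is simple enough to sketch. Here is a rough Python rendering: the offsets are the radius-3 Bresenham circle the FAST detectors use, but the threshold and arc length are illustrative parameters, and this is the naive test, not the machine-learned decision tree the paper actually derives.

```python
# Offsets of the 16 pixels on a radius-3 Bresenham circle around (0, 0).
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_corner(img, x, y, t=20, arc=12):
    """Segment test: True if at least `arc` contiguous circle pixels are all
    brighter than img[y][x] + t, or all darker than img[y][x] - t."""
    c = img[y][x]
    # Classify each circle pixel: +1 brighter, -1 darker, 0 similar.
    labels = []
    for dx, dy in CIRCLE:
        p = img[y + dy][x + dx]
        labels.append(1 if p > c + t else (-1 if p < c - t else 0))
    # Look for a contiguous run of length >= arc, allowing wraparound.
    doubled = labels + labels
    for sign in (1, -1):
        run = 0
        for v in doubled:
            run = run + 1 if v == sign else 0
            if run >= arc:
                return True
    return False
```

A flat patch triggers no run at all, while an isolated bright pixel produces a full run of 16 darker circle pixels, so the test fires.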
Read – Clean Code
Today I finished reading “Clean Code: A Handbook of Agile Software Craftsmanship” by Robert Martin
Listening – 808s And Heartbreak
This week I am listening to “808s And Heartbreak” by Kanye West
Paper – Alignment of Speech to Highly Imperfect Text Transcriptions
Today I read a paper titled “Alignment of Speech to Highly Imperfect Text Transcriptions”
The abstract is:
We introduce a novel and inexpensive approach for the temporal alignment of speech to highly imperfect transcripts from automatic speech recognition (ASR).
Transcripts are generated for extended lecture and presentation videos, which in some cases feature more than 30 speakers with different accents, resulting in highly varying transcription qualities.
In our approach we detect a subset of phonemes in the speech track, and align them to the sequence of phonemes extracted from the transcript.
We report on the results for 4 speech-transcript sets ranging from 22 to 108 minutes.
The alignment performance is promising, showing a correct matching of phonemes within 10, 20, 30 second error margins for more than 60%, 75%, 90% of text, respectively, on average.
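The core step, matching a detected phoneme sequence against the phonemes extracted from the transcript, can be illustrated with standard dynamic programming. This is a generic Needleman-Wunsch sketch, not the authors' actual matcher; the match, mismatch and gap scores are placeholder assumptions.

```python
def align(detected, transcript, match=1, mismatch=-1, gap=-1):
    """Global alignment of two phoneme sequences.
    Returns the optimal score and one aligned column list (None = gap)."""
    n, m = len(detected), len(transcript)
    # score[i][j] = best score aligning detected[:i] with transcript[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if detected[i - 1] == transcript[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + s,
                              score[i - 1][j] + gap,
                              score[i][j - 1] + gap)
    # Trace back one optimal path.
    pairs, i, j = [], n, m
    while i > 0 or j > 0:
        s = match if i > 0 and j > 0 and detected[i - 1] == transcript[j - 1] else mismatch
        if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + s:
            pairs.append((detected[i - 1], transcript[j - 1])); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i - 1][j] + gap:
            pairs.append((detected[i - 1], None)); i -= 1
        else:
            pairs.append((None, transcript[j - 1])); j -= 1
    return score[n][m], pairs[::-1]
```

With the aligned columns in hand, the timestamps of the detected phonemes carry over to the matched transcript words.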
Paper – Movie Recommendation Systems Using An Artificial Immune System
Today I read a paper titled “Movie Recommendation Systems Using An Artificial Immune System”
The abstract is:
We apply Artificial Immune System (AIS) technology to Collaborative Filtering (CF) in building a movie recommendation system.
Two different affinity measure algorithms of AIS, Kendall tau and Weighted Kappa, are used to calculate the correlation coefficients for this movie recommendation system.
From our tests we conclude that Weighted Kappa is more suitable than Kendall tau for movie recommendation problems.
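Kendall tau, one of the two affinity measures mentioned, is easy to compute directly from two users' ratings of the same movies. A minimal pure-Python sketch of the tau-a variant (the abstract does not say which tie-handling variant the authors used):

```python
def kendall_tau(a, b):
    """Kendall tau rank correlation between two equal-length rating lists.
    Counts concordant minus discordant pairs, normalized by the total number
    of pairs (tau-a: tied pairs contribute zero to the numerator)."""
    assert len(a) == len(b) and len(a) >= 2
    n = len(a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)
```

Two users who rank the movies identically score 1.0, opposite rankings score -1.0, and values in between measure how similar their tastes are.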
Read – Masterminds of Programming
Today I finished reading “Masterminds of Programming: Conversations with the Creators of Major Programming Languages” by Federico Biancuzzi
Listening – Devotion
This week I am listening to “Devotion” by Beach House
Paper – Differential Methods in Catadioptric Sensor Design with Applications to Panoramic Imaging
Today I read a paper titled “Differential Methods in Catadioptric Sensor Design with Applications to Panoramic Imaging”
The abstract is:
We discuss design techniques for catadioptric sensors that realize given projections.
In general, these problems do not have solutions, but approximate solutions may often be found that are visually acceptable.
There are several methods to approach this problem, but here we focus on what we call the “vector field approach”.
An application is given where a true panoramic mirror is derived, i.e. a mirror that yields a cylindrical projection to the viewer without any digital unwarping.
Read – Smart and Gets Things Done
Today I finished reading “Smart and Gets Things Done: Joel Spolsky’s Concise Guide to Finding the Best Technical Talent” by Joel Spolsky
Paper – Checking Equivalence of Quantum Circuits and States
Today I read a paper titled “Checking Equivalence of Quantum Circuits and States”
The abstract is:
Quantum computing promises exponential speed-ups for important simulation and optimization problems.
It also poses new CAD problems that are similar to, but more challenging than, the related problems in classical (non-quantum) CAD, such as determining if two states or circuits are functionally equivalent.
While differences in classical states are easy to detect, quantum states, which are represented by complex-valued vectors, exhibit subtle differences leading to several notions of equivalence.
This provides flexibility in optimizing quantum circuits, but leads to difficult new equivalence-checking issues for simulation and synthesis.
We identify several different equivalence-checking problems and present algorithms for practical benchmarks, including quantum communication and search circuits, which are shown to be very fast and robust for hundreds of qubits.
Listening – Cage The Elephant
This week I am listening to “Cage The Elephant” by Cage The Elephant
Paper – The Physical World as a Virtual Reality
Today I read a paper titled “The Physical World as a Virtual Reality”
The abstract is:
This paper explores the idea that the universe is a virtual reality created by information processing, and relates this strange idea to the findings of modern physics about the physical world.
The virtual reality concept is familiar to us from online worlds, but our world as a virtual reality is usually a subject for science fiction rather than science.
Yet logically the world could be an information simulation running on a multi-dimensional space-time screen.
Indeed, if the essence of the universe is information, matter, charge, energy and movement could be aspects of information, and the many conservation laws could be a single law of information conservation.
If the universe were a virtual reality, its creation at the big bang would no longer be paradoxical, as every virtual system must be booted up.
It is suggested that whether the world is an objective reality or a virtual reality is a matter for science to resolve.
Modern information science can suggest how core physical properties like space, time, light, matter and movement could derive from information processing.
Such an approach could reconcile relativity and quantum theories, with the former being how information processing creates space-time, and the latter how it creates energy and matter.
Paper – A Computational Study on Emotions and Temperament in Multi-Agent Systems
Today I read a paper titled “A Computational Study on Emotions and Temperament in Multi-Agent Systems”
The abstract is:
Recent advances in neurosciences and psychology have provided evidence that affective phenomena pervade intelligence at many levels, being inseparable from the cognition-action loop.
Perception, attention, memory, learning, decision-making, adaptation, communication and social interaction are some of the aspects influenced by them.
This work draws its inspirations from neurobiology, psychophysics and sociology to approach the problem of building autonomous robots capable of interacting with each other and building strategies based on temperamental decision mechanism.
Modelling emotions is a relatively recent focus in artificial intelligence and cognitive modelling.
Such models can ideally inform our understanding of human behavior.
We may see the development of computational models of emotion as a core research focus that will facilitate advances in the large array of computational systems that model, interpret or influence human behavior.
We propose a model based on a scalable, flexible and modular approach to emotion which allows runtime evaluation between emotional quality and performance.
The results showed that strategies based on a temperamental decision mechanism strongly influence system performance, and that there is an evident dependency between the emotional state of the agents and their temperamental type, as well as between team performance and the temperamental configuration of the team members.
This enables us to conclude that a modular approach to emotional programming based on temperament theory is a good choice for developing computational mind models for emotional behavioral multi-agent systems.
Studying – Type effects in Photoshop
This month I am studying “Type effects in Photoshop”
Listening – Vampire Weekend
This week I am listening to “Vampire Weekend” by Vampire Weekend
Paper – Virtual Reality Simulation of Fire Fighting Robot Dynamic and Motion
Today I read a paper titled “Virtual Reality Simulation of Fire Fighting Robot Dynamic and Motion”
The abstract is:
This paper presents one approach to designing a Fire Fighting Robot, a contest held annually in student robotics competitions in many countries following the rules initiated at Trinity College.
The approach makes use of computer simulation and animation in a virtual reality environment.
In the simulation, the time taken from leaving home until the flame is destroyed can be confirmed.
The efficacy of algorithms and parameter values employed can be easily evaluated.
Rather than spending time building the real robot in a trial-and-error fashion, students can now explore more variations of algorithms, parameters and sensor-actuator configurations in the early stage of design.
Besides providing additional excitement during the learning process and enhancing students' understanding of the engineering aspects of the design, this approach could become a useful tool to increase the chance of winning the contest.
Paper – Local search heuristics: Fitness Cloud versus Fitness Landscape
Today I read a paper titled “Local search heuristics: Fitness Cloud versus Fitness Landscape”
The abstract is:
This paper introduces the concept of fitness cloud as an alternative way to visualize and analyze search spaces than given by the geographic notion of fitness landscape.
It is argued that the fitness cloud concept overcomes several deficiencies of the landscape representation.
Our analysis is based on the correlation between the fitness of solutions and the fitness of neighboring solutions under some neighborhood relation.
We focus on the behavior of local search heuristics, such as hill climbing, on the well-known NK fitness landscape.
In both cases the fitness-versus-fitness correlation is shown to be related to the epistatic parameter K.
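The fitness-cloud idea, pairing each solution's fitness with the fitness a local search step can reach from it, can be sketched on a small NK landscape. This is an illustrative reconstruction, not the paper's code: the values of N and K, the sample count, and the choice of "best one-bit-flip neighbor" as the reachable fitness are all assumptions.

```python
import random

def nk_landscape(n, k, seed=0):
    """Random NK landscape: each bit's contribution depends on itself and
    its k right neighbors (circular); contribution tables are random."""
    rng = random.Random(seed)
    tables = [[rng.random() for _ in range(2 ** (k + 1))] for _ in range(n)]
    def fitness(bits):
        total = 0.0
        for i in range(n):
            idx = 0
            for j in range(k + 1):
                idx = (idx << 1) | bits[(i + j) % n]
            total += tables[i][idx]
        return total / n
    return fitness

def fitness_cloud(n=10, k=2, samples=200, seed=1):
    """Sample random solutions; pair each fitness with the best fitness
    among its one-bit-flip neighbors (a hill climber's next step)."""
    rng = random.Random(seed)
    f = nk_landscape(n, k)
    points = []
    for _ in range(samples):
        bits = [rng.randint(0, 1) for _ in range(n)]
        best = max(f(bits[:i] + [1 - bits[i]] + bits[i + 1:]) for i in range(n))
        points.append((f(bits), best))
    return points

def pearson(points):
    """Pearson correlation of the (fitness, neighbor-fitness) pairs."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in points)
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5
```

Plotting the sampled points gives the cloud; for small K the correlation is strong, and it weakens as epistasis grows.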
Paper – A Predictive Theory of Games
Today I read a paper titled “A Predictive Theory of Games”
The abstract is:
Conventional noncooperative game theory hypothesizes that the joint strategy of a set of players in a game must satisfy an “equilibrium concept”.
All other joint strategies are considered impossible; the only issue is what equilibrium concept is “correct”.
This hypothesis violates the desiderata underlying probability theory.
Indeed, probability theory renders moot the problem of what equilibrium concept is correct – every joint strategy can arise with non-zero probability.
Rather than a first-principles derivation of an equilibrium concept, game theory requires a first-principles derivation of a distribution over joint (mixed) strategies.
This paper shows how information theory can provide such a distribution over joint strategies.
If a scientist external to the game wants to distill such a distribution to a point prediction, that prediction should be set by decision theory, using their (!) loss function.
So the predicted joint strategy – the “equilibrium concept” – varies with the external scientist’s loss function.
It is shown here that in many games, having a probability distribution with support restricted to Nash equilibria – as stipulated by conventional game theory – is impossible.
It is also shown how to: i) Derive an information-theoretic quantification of a player’s degree of rationality; ii) Derive bounded rationality as a cost of computation; iii) Elaborate the close formal relationship between game theory and statistical physics; iv) Use this relationship to extend game theory to allow stochastically varying numbers of players.
Read – Go Put Your Strengths to Work
Today I finished reading “Go Put Your Strengths to Work: 6 Powerful Steps to Achieve Outstanding Performance” by Marcus Buckingham
Read – Number
Today I finished reading “Number: The Language of Science” by Tobias Dantzig
Read – Azumanga Daioh: The Omnibus
Today I finished reading “Azumanga Daioh: The Omnibus” by Kiyohiko Azuma
Paper – Truecluster: robust scalable clustering with model selection
Today I read a paper titled “Truecluster: robust scalable clustering with model selection”
The abstract is:
Data-based classification is fundamental to most branches of science.
While recent years have brought enormous progress in various areas of statistical computing and clustering, some general challenges in clustering remain: model selection, robustness, and scalability to large datasets.
We consider the important problem of deciding on the optimal number of clusters, given an arbitrary definition of space and clusteriness.
We show how to construct a cluster information criterion that allows objective model selection.
Differing from other approaches, our truecluster method does not require specific assumptions about underlying distributions, dissimilarity definitions or cluster models.
Truecluster puts arbitrary clustering algorithms into a generic unified (sampling-based) statistical framework.
It is scalable to big datasets and provides robust cluster assignments and case-wise diagnostics.
Truecluster will make clustering more objective, allow for automation, and save time and costs.
Free R software is available.
Paper – Get out the vote: Determining support or opposition from Congressional floor-debate transcripts
Today I read a paper titled “Get out the vote: Determining support or opposition from Congressional floor-debate transcripts”
The abstract is:
We investigate whether one can determine from the transcripts of U.S. Congressional floor debates whether the speeches represent support of or opposition to proposed legislation.
To address this problem, we exploit the fact that these speeches occur as part of a discussion; this allows us to use sources of information regarding relationships between discourse segments, such as whether a given utterance indicates agreement with the opinion expressed by another.
We find that the incorporation of such information yields substantial improvements over classifying speeches in isolation.
Listening – Scream Aim Fire
This week I am listening to “Scream Aim Fire” by Bullet For My Valentine
Read – iPhone Game Development
Today I finished reading “iPhone Game Development” by Paul Zirkle
Read – Jill Spiegel’s How To Talk To Anyone About Anything!
Today I finished reading “Jill Spiegel’s How To Talk To Anyone About Anything!” by Jill Spiegel
Read – On the Decay of the Art of Lying
Today I finished reading “On the Decay of the Art of Lying” by Mark Twain
Paper – Incremental Recompilation of Knowledge
Today I read a paper titled “Incremental Recompilation of Knowledge”
The abstract is:
Approximating a general formula from above and below by Horn formulas (its Horn envelope and Horn core, respectively) was proposed by Selman and Kautz (1991, 1996) as a form of “knowledge compilation,” supporting rapid approximate reasoning; on the negative side, this scheme is static in that it supports no updates, and has certain complexity drawbacks pointed out by Kavvadias, Papadimitriou and Sideri (1993).
On the other hand, the many frameworks and schemes proposed in the literature for theory update and revision are plagued by serious complexity-theoretic impediments, even in the Horn case, as was pointed out by Eiter and Gottlob (1992), and is further demonstrated in the present paper.
More fundamentally, these schemes are not inductive, in that they may lose in a single update any positive properties of the represented sets of formulas (small size, Horn structure, etc.).
In this paper we propose a new scheme, incremental recompilation, which combines Horn approximation and model-based updates; this scheme is inductive and very efficient, free of the problems facing its constituents.
A set of formulas is represented by an upper and lower Horn approximation.
To update, we replace the upper Horn formula by the Horn envelope of its minimum-change update, and similarly the lower one by the Horn core of its update; the key fact which enables this scheme is that Horn envelopes and cores are easy to compute when the underlying formula is the result of a minimum-change update of a Horn formula by a clause.
We conjecture that efficient algorithms are possible for more complex updates.
Listening – Santogold
This week I am listening to “Santogold” by Santigold
Read – Talent is Overrated
Today I finished reading “Talent is Overrated: What Really Separates World-Class Performers from Everybody Else” by Geoff Colvin
Read – The Back of the Napkin
Today I finished reading “The Back of the Napkin: Solving Problems and Selling Ideas with Pictures” by Dan Roam
Paper – Multiagent Control of Self-reconfigurable Robots
Today I read a paper titled “Multiagent Control of Self-reconfigurable Robots”
The abstract is:
We demonstrate how multiagent systems provide useful control techniques for modular self-reconfigurable (metamorphic) robots.
Such robots consist of many modules that can move relative to each other, thereby changing the overall shape of the robot to suit different tasks.
Multiagent control is particularly well-suited for tasks involving uncertain and changing environments.
We illustrate this approach through simulation experiments of Proteo, a metamorphic robot system currently under development.
Paper – The source coding game with a cheating switcher
Today I read a paper titled “The source coding game with a cheating switcher”
The abstract is:
Motivated by the lossy compression of an active-vision video stream, we consider the problem of finding the rate-distortion function of an arbitrarily varying source (AVS) composed of a finite number of subsources with known distributions.
Berger’s paper “The Source Coding Game” (IEEE Trans. Inform. Theory, 1971) solves this problem under the condition that the adversary is allowed only strictly causal access to the subsource realizations.
We consider the case when the adversary has access to the subsource realizations non-causally.
Using the type-covering lemma, this new rate-distortion function is determined to be the maximum of the IID rate-distortion function over a set of source distributions attainable by the adversary.
We then extend the results to allow for partial or noisy observations of subsource realizations.
We further explore the model by attempting to find the rate-distortion function when the adversary is actually helpful.
Finally, a bound is developed on the uniform continuity of the IID rate-distortion function for finite-alphabet sources.
The bound is used to give a sufficient number of distributions that need to be sampled to compute the rate-distortion function of an AVS to within a certain accuracy.
The bound is also used to give a rate of convergence for the estimate of the rate-distortion function for an unknown IID finite-alphabet source.
Listening – Walking On A Dream
This week I am listening to “Walking On A Dream” by Empire Of The Sun
Read – The Fluorescent Light Glistens off Your Head
Today I finished reading “The Fluorescent Light Glistens off Your Head” by Scott Adams
Paper – A Robust and Efficient Three-Layered Dialogue Component for a Speech-to-Speech Translation System
Today I read a paper titled “A Robust and Efficient Three-Layered Dialogue Component for a Speech-to-Speech Translation System”
The abstract is:
We present the dialogue component of the speech-to-speech translation system VERBMOBIL.
In contrast to conventional dialogue systems it mediates the dialogue while processing at most 50% of the dialogue in depth.
Special requirements like robustness and efficiency lead to a 3-layered hybrid architecture for the dialogue module, using statistics, an automaton and a planner.
A dialogue memory is constructed incrementally.
Studying – Imitating oils in digital media
This month I am studying “Imitating oils in digital media”
Paper – A multilateral filtering method applied to airplane runway image
Today I read a paper titled “A multilateral filtering method applied to airplane runway image”
The abstract is:
Considering the features of airport runway image filtering, an improved bilateral filtering method is proposed which can remove noise while preserving edges.
First, a steerable filtering decomposition is used to calculate the sub-band parameters of 4 orientations, and the texture feature matrix is then obtained from the sub-band local median energy.
Texture-similarity, spatial-closeness and color-similarity functions are used to filter the image.
The effect of the weighting-function parameters is also qualitatively analyzed.
Simulation results for a real airport runway image show that the multilateral filtering is more effective than the standard bilateral filtering.
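The standard bilateral filter that the paper extends is straightforward to sketch. In a pure-Python version, each output pixel is a weighted mean of its neighbors, with the weight a product of spatial closeness and intensity similarity; the paper's multilateral variant adds a texture-similarity weight on top of these two, and the sigma values below are illustrative, not the paper's.

```python
import math

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=30.0):
    """Bilateral filter on a 2D grayscale image (list of lists of numbers).
    Large intensity jumps get near-zero range weight, so edges survive."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = norm = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        # Spatial closeness weight.
                        ws = math.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                        # Intensity (range) similarity weight.
                        diff = img[ny][nx] - img[y][x]
                        wr = math.exp(-(diff * diff) / (2 * sigma_r ** 2))
                        acc += ws * wr * img[ny][nx]
                        norm += ws * wr
            out[y][x] = acc / norm
    return out
```

On a step edge the cross-edge weights are essentially zero, so each side is smoothed toward its own level rather than blurred into the other.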
Listening – Antidotes
This week I am listening to “Antidotes” by Foals
Paper – The topology of covert conflict
Today I read a paper titled “The topology of covert conflict”
The abstract is:
Often an attacker tries to disconnect a network by destroying nodes or edges, while the defender counters using various resilience mechanisms.
Examples include a music industry body attempting to close down a peer-to-peer file-sharing network; medics attempting to halt the spread of an infectious disease by selective vaccination; and a police agency trying to decapitate a terrorist organisation.
Albert, Jeong and Barabasi famously analysed the static case, and showed that vertex-order attacks are effective against scale-free networks.
We extend this work to the dynamic case by developing a framework based on evolutionary game theory to explore the interaction of attack and defence strategies.
We show, first, that naive defences don’t work against vertex-order attack; second, that defences based on simple redundancy don’t work much better, but that defences based on cliques work well; third, that attacks based on centrality work better against clique defences than vertex-order attacks do; and fourth, that defences based on complex strategies such as delegation plus clique resist centrality attacks better than simple clique defences.
Our models thus build a bridge between network analysis and evolutionary game theory, and provide a framework for analysing defence and attack in networks where topology matters.
They suggest definitions of efficiency of attack and defence, and may even explain the evolution of insurgent organisations from networks of cells to a more virtual leadership that facilitates operations rather than directing them.
Finally, we draw some conclusions and present possible directions for future research.