This week I am listening to “English Electric (Part One)” by Big Big Train
Watching – Ip Man 2
Today I watched “Ip Man 2”
Listening – 1999
This week I am listening to “1999” by Joey Bada$$
Read – Perfect Phrases for Meetings
Today I finished reading “Perfect Phrases for Meetings” by Don Debelak
Read – Programming in Lua
Today I finished reading “Programming in Lua” by Roberto Ierusalimschy
Watching – Finding Forrester
Today I watched “Finding Forrester”
Read – The One Thing
Today I finished reading “The One Thing: The Surprisingly Simple Truth Behind Extraordinary Results” by Gary Keller
Paper – Characterizing the speed and paths of shared bicycles in Lyon
Today I read a paper titled “Characterizing the speed and paths of shared bicycles in Lyon”
The abstract is:
Thanks to numerical data gathered by Lyon’s shared bicycling system Vélo’v, we are able to analyze 11.6 million bicycle trips, leading to the first robust characterization of urban bikers’ behaviors.
We show that bicycles outstrip cars in downtown Lyon by combining high speed and short paths. These data also allow us to calculate Vélo’v fluxes on all streets, pointing to interesting locations for bike paths.
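The headline measurement here is simple arithmetic; as a reminder to myself, here is a minimal sketch of computing per-trip speeds from trip records (the field names and values are invented, not the actual Vélo’v data schema):

```python
# Hypothetical sketch (not the paper's code): per-trip average speeds from
# shared-bicycle trip records, assuming each record carries a path length in
# metres and a duration in seconds.
trips = [
    {"length_m": 2300, "duration_s": 540},   # made-up example records
    {"length_m": 1200, "duration_s": 260},
    {"length_m": 4100, "duration_s": 980},
]

speeds_kmh = [3.6 * t["length_m"] / t["duration_s"] for t in trips]  # m/s -> km/h
print("mean speed: %.1f km/h" % (sum(speeds_kmh) / len(speeds_kmh)))
```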
Watching – The Raven
Today I watched “The Raven”
Listening – Come Of Age
This week I am listening to “Come Of Age” by The Vaccines
Read – Whale Done!
Today I finished reading “Whale Done!: The Power of Positive Relationships” by Kenneth Blanchard
Read – The Demon-Haunted World
Today I finished reading “The Demon-Haunted World: Science as a Candle in the Dark” by Carl Sagan
Paper – Evaluation of Three Vision Based Object Perception Methods for a Mobile Robot
Today I read a paper titled “Evaluation of Three Vision Based Object Perception Methods for a Mobile Robot”
The abstract is:
This paper addresses object perception applied to mobile robotics.
Being able to perceive semantically meaningful objects in unstructured environments is a key capability in order to make robots suitable to perform high-level tasks in home environments.
However, finding a solution for this task is daunting: it requires the ability to handle the variability in image formation in a moving camera with tight time constraints.
The paper brings to attention some of the issues with applying three state-of-the-art object recognition and detection methods in a mobile robotics scenario, and proposes methods to deal with windowing/segmentation.
Thus, this work aims at evaluating the state-of-the-art in object perception in an attempt to develop a lightweight solution for mobile robotics use/research in typical indoor settings.
Studying – How to draw Better & faster with Illustrator
This month I am studying “How to draw Better & faster with Illustrator”
I think this is going to be the last Adobe Illustrator class I take for a while. I am getting burned out on the first few hours of each class “introducing Illustrator” before I get to the meat of the knowledge.
Listening – Bloom
This week I am listening to “Bloom” by Beach House
Salt – the secret ingredient
“Do you use sea salt?” asked the acquaintance I ran into at a BBQ where we got around to discussing my culinary studies.
“When it’s appropriate,” I responded.
“Ah, I knew it. That’s what your secret is. That’s how you made that dish taste how it did.”
And I just stood there, silent and a little smug, not because this acquaintance had figured out my culinary secret but because it was the software development equivalent of figuring out which super-awesome secret C++ compiler I used to create award-winning games.
Paper – Nonlinear Receding-Horizon Control of Rigid Link Robot Manipulators
Today I read a paper titled “Nonlinear Receding-Horizon Control of Rigid Link Robot Manipulators”
The abstract is:
The approximate nonlinear receding-horizon control law is used to treat the trajectory tracking control problem of rigid link robot manipulators.
The derived nonlinear predictive law uses a quadratic performance index of the predicted tracking error and the predicted control effort.
A key feature of this control law is that, for its implementation, there is no need to perform an online optimization, and asymptotic tracking of smooth reference trajectories is guaranteed.
It is shown that this controller achieves the position tracking objective using only link position measurements.
The stability and convergence of the output tracking error to the origin are proved.
To enhance the robustness of the closed loop system with respect to payload uncertainties and viscous friction, an integral action is introduced in the loop.
A nonlinear observer is used to estimate velocity.
Simulation results for a two-link rigid robot are presented to validate the performance of the proposed controller.
Keywords: receding-horizon control, nonlinear observer, robot manipulators, integral action, robustness.
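For context, the receding-horizon idea itself is easy to sketch. Below is my own toy receding-horizon tracking loop for a double integrator that re-solves a small quadratic cost at every step; note that the paper's whole point is that its approximate law avoids exactly this kind of online optimization, so this is a generic illustration, not the paper's controller:

```python
# Generic receding-horizon (MPC) tracking loop on a toy double integrator.
import numpy as np
from scipy.optimize import minimize

dt, N = 0.05, 10                      # sample time, prediction horizon
A = np.array([[1.0, dt], [0.0, 1.0]]) # double-integrator dynamics
B = np.array([0.0, dt])
Q, R = 10.0, 0.01                     # tracking-error and control-effort weights

def rollout_cost(u_seq, x0, ref):
    """Quadratic cost of predicted tracking error plus predicted control effort."""
    x, cost = x0.copy(), 0.0
    for u in u_seq:
        x = A @ x + B * u
        cost += Q * (x[0] - ref) ** 2 + R * u ** 2
    return cost

x = np.array([0.0, 0.0])              # initial position and velocity
reference = 1.0
for step in range(100):
    res = minimize(rollout_cost, np.zeros(N), args=(x, reference))
    u0 = res.x[0]                     # apply only the first input, then re-plan
    x = A @ x + B * u0
print("final position: %.3f" % x[0])
```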
Watching – The First Wives Club
Today I watched “The First Wives Club”
Listening – The Heist
This week I am listening to “The Heist” by Macklemore & Ryan Lewis
Read – To The Stars
Today I finished reading “To The Stars” by Robert Heinlein
Read – Man-Kzin Wars 3
Today I finished reading “Man-Kzin Wars 3” by Larry Niven
Read – Computational Modeling of Narrative
Today I finished reading “Computational Modeling of Narrative” by Inderjeet Mani
Listening – Born To Die
This week I am listening to “Born To Die” by Lana Del Rey
Read – Elementary Particles and the Laws of Physics
Today I finished reading “Elementary Particles and the Laws of Physics: The 1986 Dirac Memorial Lectures” by Richard Feynman
Paper – Artificial Intelligence Techniques for Steam Generator Modelling
Today I read a paper titled “Artificial Intelligence Techniques for Steam Generator Modelling”
The abstract is:
This paper investigates the use of different Artificial Intelligence methods to predict the values of several continuous variables from a Steam Generator.
The objective was to determine how the different artificial intelligence methods performed in making predictions on the given dataset.
The artificial intelligence methods evaluated were Neural Networks, Support Vector Machines, and Adaptive Neuro-Fuzzy Inference Systems.
The types of neural networks investigated were Multi-Layer Perceptrons, and Radial Basis Function.
Bayesian and committee techniques were applied to these neural networks.
Each of the AI methods considered was simulated in Matlab.
The results of the simulations showed that all the AI methods were capable of predicting the Steam Generator data reasonably accurately.
However, the Adaptive Neuro-Fuzzy Inference system outperformed the other methods in terms of accuracy and ease of implementation, while still achieving a fast execution time as well as a reasonable training time.
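A rough feel for this kind of comparison can be had with off-the-shelf regressors; the sketch below pits a small neural network against an SVM on synthetic stand-in data (the real steam-generator dataset and the ANFIS model are not reproduced here):

```python
# Hedged sketch: comparing two of the evaluated model families on synthetic data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 4))        # pretend inputs, e.g. pressure, flow, level, power
y = X @ np.array([2.0, -1.0, 0.5, 3.0]) + 0.05 * rng.normal(size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for name, model in [("MLP", MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)),
                    ("SVR", SVR(kernel="rbf", C=10.0))]:
    model.fit(X_tr, y_tr)
    print(name, "test MSE: %.4f" % mean_squared_error(y_te, model.predict(X_te)))
```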
I’m not rude, I’m programming
When you interrupt a programmer and they respond with “WHAT?!?” or don’t even pay attention to you, they’re not being rude to you.
Point 1: They probably haven’t even heard you.
But most importantly:
Point 2: The “WHAT?!?” that a programmer focused on their task just responded with, as though you are the last person on Earth they want to see, is actually the programmer restarting their speech centers.
Programmers, all creators who get in the zone, and even people who read deeply, cycle down the “polite discourse and capable speech for a functioning society” part of their brain when it is not being used. It’s a speech app that got unloaded from memory because it wasn’t needed. The first few seconds of responsiveness you get from someone in the zone are the primal speech patterns responding, because they boot up faster and come online sooner. The “polite society” module takes longer to load (it’s really bloated because it was designed by committee), so the first responses can be an affront to what you consider “professional behaviour.”
You should no more expect a civil response (that part of a programmer’s brain just doesn’t exist at that moment) than you should expect a cat to show you affection; again, that part of the cat’s brain just doesn’t exist.
Paper – The Inverse Task of the Reflexive Game Theory: Theoretical Matters, Practical Applications and Relationship with Other Issues
Today I read a paper titled “The Inverse Task of the Reflexive Game Theory: Theoretical Matters, Practical Applications and Relationship with Other Issues”
The abstract is:
The Reflexive Game Theory (RGT) has been recently proposed by Vladimir Lefebvre to model behavior of individuals in groups.
The goal of this study is to introduce the Inverse task.
We consider methods of solution together with practical applications.
We present a brief overview of the RGT for easy understanding of the problem.
We also develop the schematic representation of the RGT inference algorithms to create the basis for soft- and hardware solutions of the RGT tasks.
We propose a unified hierarchy of schemas to represent humans and robots.
This hierarchy is considered as a unified framework to solve the entire spectrum of the RGT tasks.
We conclude by illustrating how this framework can be applied for modeling of mixed groups of humans and robots.
Altogether this provides the exhaustive solution of the Inverse task and clearly illustrates its role and relationships with other issues considered in the RGT.
Paper – Lexical Knowledge Representation in an Intelligent Dictionary Help System
Today I read a paper titled “Lexical Knowledge Representation in an Intelligent Dictionary Help System”
The abstract is:
The frame-based knowledge representation model adopted in IDHS (Intelligent Dictionary Help System) is described in this paper.
It is used to represent the lexical knowledge acquired automatically from a conventional dictionary.
Moreover, the enrichment processes that have been performed on the Dictionary Knowledge Base and the dynamic exploitation of this knowledge – both based on the exploitation of the properties of lexical semantic relations – are also described.
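A frame, in this sense, is just a named bundle of slots and fillers; here is a minimal illustration in Python (the slot names and gloss are my own invention, not IDHS’s actual frame language):

```python
# Hypothetical sketch of a frame-based lexical entry with semantic relations as slots.
from dataclasses import dataclass, field

@dataclass
class Frame:
    name: str
    slots: dict = field(default_factory=dict)   # slot name -> list of fillers

# One dictionary sense represented as a frame, with lexical-semantic relations.
dog = Frame("dog", {
    "hypernym": ["mammal"],        # is-a relation extracted from the definition
    "meronym": ["tail", "paw"],    # part-of relations
    "gloss": "domesticated carnivorous mammal",
})
print(dog.slots["hypernym"])
```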
Listening – Tempest
This week I am listening to “Tempest” by Bob Dylan
Read – Intron Depot 3: Ballistics
Today I finished reading “Intron Depot 3: Ballistics” by Masamune Shirow
Paper – An Eye Tracking Study into the Effects of Graph Layout
Today I read a paper titled “An Eye Tracking Study into the Effects of Graph Layout”
The abstract is:
Graphs are typically visualized as node-link diagrams.
Although there is a fair amount of research focusing on crossing minimization to improve readability, little attention has been paid to how to handle crossings when they are an essential part of the final visualizations.
This requires us to understand how people read graphs and how crossings affect reading performance.
As an initial step to this end, a preliminary eye tracking experiment was conducted.
The specific purpose of this experiment was to test the effects of crossing angles and geometric-path tendency on eye movements and performance.
Sixteen subjects performed both path search and node locating tasks with six drawings.
The results showed that small angles can slow down and trigger extra eye movements, causing delays for path search tasks, whereas crossings have little impact on node locating tasks.
Geometric-path tendency indicates that a path between two nodes can become harder to follow when many branches of the path go toward the target node.
The insights obtained are discussed with a view to further confirmation in future work.
Read – The Feynman Lectures on Physics Vol 20
Today I finished reading “The Feynman Lectures on Physics Vol 20” by Richard Feynman
Paper – A new approach for digit recognition based on hand gesture analysis
Today I read a paper titled “A new approach for digit recognition based on hand gesture analysis”
The abstract is:
We present in this paper a new approach for hand gesture analysis that allows digit recognition.
The analysis is based on extracting a set of features from a hand image and then combining them by using an induction graph.
The most important features we extract from each image are the finger locations, their heights and the distance between each pair of fingers.
Our approach consists of three steps: (i) Hand detection and localization, (ii) fingers extraction and (iii) features identification and combination to digit recognition.
Each input image is assumed to contain only one person, so we apply a fuzzy classifier to identify the skin pixels.
In the finger extraction step, we attempt to remove all the hand components except the fingers; this process is based on the hand anatomy properties.
The final step consists of representing the histogram of the detected fingers in order to extract features that will be used for digit recognition.
The approach is invariant to scale, rotation and translation of the hand.
Some experiments have been undertaken to show the effectiveness of the proposed approach.
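For comparison, a common classical baseline for this kind of task is skin thresholding plus convexity-defect counting; the rough sketch below (assuming the OpenCV 4 API) shows that baseline, not the fuzzy-classifier and induction-graph pipeline the paper describes:

```python
# Baseline finger counting: HSV skin threshold + convexity defects (OpenCV 4 assumed).
import cv2
import numpy as np

def count_fingers(bgr_image):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, (0, 30, 60), (25, 180, 255))      # crude skin range (assumption)
    contours, _ = cv2.findContours(skin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)                  # assume the hand is the largest blob
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    # Each sufficiently deep convexity defect roughly marks a gap between two fingers.
    deep = sum(1 for i in range(defects.shape[0]) if defects[i, 0, 3] / 256.0 > 20)
    return min(deep + 1, 5)

# usage: print(count_fingers(cv2.imread("hand.jpg")))
```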
Read – Master of Space and Time
Today I finished reading “Master of Space and Time” by Rudy Rucker
Studying – Creating icons with Illustrator
This month I am studying “Creating icons with Illustrator”
Read – Maximum Ride #1
Today I finished reading “Maximum Ride #1” by James Patterson
Listening – What We Saw From The Cheap Seats
This week I am listening to “What We Saw From The Cheap Seats” by Regina Spektor
Read – The Magic of Thinking Big
Today I finished reading “The Magic of Thinking Big” by David Schwartz
Paper – Evolving knowledge through negotiation
Today I read a paper titled “Evolving knowledge through negotiation”
The abstract is:
Semantic web information is at the extremities of long pipelines held by human beings.
They are at the origin of information and they will consume it either explicitly because the information will be delivered to them in a readable way, or implicitly because the computer processes consuming this information will affect them.
Computers are particularly capable of dealing with information the way it is provided to them.
However, people may assign to the information they provide a narrower meaning than semantic technologies may consider.
This is typically what happens when people do not think of their assertions as ambiguous.
Model theory, used to provide semantics to the information on the semantic web, is particularly apt at preserving ambiguity and delivering it to the other side of the pipeline.
Indeed, it preserves as many interpretations as possible.
This quality, which aids reasoning efficiency, becomes a deficiency for accurate communication and meaning preservation.
Overcoming it may require either interactive feedback or preservation of the source context.
Work from social science and humanities may help solve this particular problem.
Paper – Incremental Temporal Logic Synthesis of Control Policies for Robots Interacting with Dynamic Agents
Today I read a paper titled “Incremental Temporal Logic Synthesis of Control Policies for Robots Interacting with Dynamic Agents”
The abstract is:
We consider the synthesis of control policies from temporal logic specifications for robots that interact with multiple dynamic environment agents.
Each environment agent is modeled by a Markov chain whereas the robot is modeled by a finite transition system (in the deterministic case) or Markov decision process (in the stochastic case).
Existing results in probabilistic verification are adapted to solve the synthesis problem.
To partially address the state explosion issue, we propose an incremental approach where only a small subset of environment agents is incorporated in the synthesis procedure initially and more agents are successively added until we hit the constraints on computational resources.
Our algorithm runs in an anytime fashion where the probability that the robot satisfies its specification increases as the algorithm progresses.
Why I don’t support my OpenSource projects
I believe strongly in supporting the products I make and sell.
Strong support is a revenue-generating feature as far as I am concerned, either through support licenses or because people, and developers and managers especially, will perceive good support as a value-add to any product they use.
But I don’t offer support on my giveaway OpenSource projects these days beyond a cursory “I’ll fix it when I get to it” philosophy.
What I realised a few years ago was that most of my time was being sucked up, for free, by individuals and companies requesting fixes to obscure bugs that affected one in five thousand developers (literally!) and features that would only be useful to a handful of people.
Yes, I admit it sucks when an obscure bug affects your day-to-day work and the developer won’t fix it, and even though the source code is available for the taking you don’t have time to fix it yourself.
Yes, I admit it sucks when a developer puts out a project and doesn’t update it for years and support for the latest model hardware falls behind.
I have over 20 OpenSource projects of various types I have personally developed over the years, which are now all hosted at http://code.otakunozoku.com/.
I don’t directly offer support on any of them.
If someone reports a bug or desires a particular feature, I’ll add it to the list of things to do.
And that is about all I will do with someone’s urgent request.
I don’t prioritise any particular task based on a request or report. I have found that this is the only way to stay sane in a world where every person you interact with believes that your TO DO list should be publicly accessible and writeable.
The best productivity tool I have developed to date is the word “No.”
In each README of my projects there is now an explicit “No support provided” line that clearly sets the expectation for people who download the software or source code.
I have interacted with a few people who still expect support, even demand it in a few cases, but generally I think it has had a net positive effect on people’s expectations.
Paper – Why aren’t the small worlds of protein contact networks smaller
Today I read a paper titled “Why aren’t the small worlds of protein contact networks smaller”
The abstract is:
Computer experiments are performed to investigate why protein contact networks (networks induced by spatial contacts between amino acid residues of a protein) do not have shorter average shortest path lengths in spite of their importance to protein folding.
We find that shorter average inter-nodal distances are no guarantee of finding a global optimum more easily.
Results from the experiments also led to observations which parallel an existing view that neither short-range nor long-range interactions dominate the protein folding process.
Nonetheless, runs where there was a slight delay in the use of long-range interactions yielded the best search performance.
We incorporate this finding into the optimization function by giving more weight to short-range links.
This produced results showing that randomizing long-range links does not yield better search performance than protein contact networks au naturel, even though randomizing long-range links significantly reduces average path lengths and retains much of the clustering and positive degree-degree correlation inherent in protein contact networks.
Hence there can be explanations, other than the excluded volume argument, beneath the topological limits of protein contact networks.
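The two statistics the paper keeps returning to, average shortest path length and clustering, are easy to play with on a toy small-world graph; a quick illustration of my own using networkx (not the paper's protein contact data):

```python
# Compare path length and clustering on a short-range lattice vs. one with
# long-range shortcuts (Watts-Strogatz stand-in, not a protein contact network).
import networkx as nx

G = nx.watts_strogatz_graph(n=200, k=6, p=0.0)                     # pure short-range ring lattice
H = nx.connected_watts_strogatz_graph(n=200, k=6, p=0.1, seed=0)   # 10% of edges rewired long-range
for name, g in [("lattice  ", G), ("shortcuts", H)]:
    print(name,
          "L=%.2f" % nx.average_shortest_path_length(g),
          "C=%.2f" % nx.average_clustering(g))
```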
Talking at vs talking to
Anybody who believes that voice-controlled user interfaces are the future has never had to try and use Siri or Google to get directions with my future mother-in-law sat in the passenger seat nattering away about nothing at all.
It is nigh on impossible to actually ask for directions, and she (my future mother-in-law, not Siri) is in the habit of answering for the phone.
“You know your car’s navigation system is very rude. It rudely interrupts me when I am talking to you.”
Paper – A statistical learning approach to color demosaicing
Today I read a paper titled “A statistical learning approach to color demosaicing”
The abstract is:
A statistical learning/inference framework for color demosaicing is presented.
We start with simplistic assumptions about color constancy, and recast color demosaicing as a blind linear inverse problem: color parameterizes the unknown kernel, while brightness takes on the role of a latent variable.
An expectation-maximization algorithm naturally suggests itself for the estimation of them both.
Then, as we gradually broaden the family of hypotheses where color is learned, we let our demosaicing behave adaptively, in a manner that reflects our prior knowledge about the statistics of color images.
We show that we can incorporate realistic, learned priors without essentially changing the complexity of the simple expectation-maximization algorithm we started with.
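To keep the problem concrete: demosaicing fills in the two missing color samples at every pixel of a Bayer mosaic. The sketch below is plain bilinear interpolation via normalized convolution, shown only as a baseline; it is not the paper's EM-based blind-inverse method:

```python
# Baseline bilinear demosaicing of an RGGB Bayer mosaic (illustrative only).
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(mosaic):
    """mosaic: 2-D array sampled on an RGGB Bayer pattern; returns an (h, w, 3) image."""
    h, w = mosaic.shape
    rows, cols = np.mgrid[0:h, 0:w]
    masks = {
        "R": (rows % 2 == 0) & (cols % 2 == 0),
        "G": (rows % 2) != (cols % 2),
        "B": (rows % 2 == 1) & (cols % 2 == 1),
    }
    kernel = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float)
    channels = []
    for c in ("R", "G", "B"):
        m = masks[c].astype(float)
        # Normalized convolution: interpolate only from the sampled positions.
        channels.append(convolve(mosaic * m, kernel) / convolve(m, kernel))
    return np.stack(channels, axis=-1)

# usage: rgb = bilinear_demosaic(raw_bayer_image)
```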
Paper – Wavefront Propagation and Fuzzy Based Autonomous Navigation
Today I read a paper titled “Wavefront Propagation and Fuzzy Based Autonomous Navigation”
The abstract is:
Path planning and obstacle avoidance are the two major issues in any navigation system.
The wavefront propagation algorithm, as a good path planner, can be used to determine an optimal path.
Obstacle avoidance can be achieved using possibility theory.
Combining these two functions enables a robot to autonomously navigate to its destination.
This paper presents the approach and results in implementing an autonomous navigation system for an indoor mobile robot.
The system developed is based on a laser sensor used to retrieve data to update a two-dimensional world model of the robot’s environment.
Waypoints in the path are incorporated into the obstacle avoidance.
Features such as ageing of objects and smooth motion planning are implemented to enhance efficiency and also to cater for dynamic environments.
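The wavefront planner itself is compact enough to sketch; below is my own minimal grid version (a breadth-first wave from the goal, then descent over the wave values), with the laser sensing and fuzzy obstacle avoidance omitted:

```python
# Minimal wavefront propagation planner on an occupancy grid (illustrative sketch).
from collections import deque

def wavefront_path(grid, start, goal):
    """grid: 2-D list, 0 = free, 1 = obstacle. Returns a list of cells from start to goal."""
    h, w = len(grid), len(grid[0])
    dist = {goal: 0}
    queue = deque([goal])
    while queue:                                   # breadth-first wave spreading from the goal
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w and grid[nr][nc] == 0 and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    if start not in dist:
        return None                                # goal unreachable from start
    path, cell = [start], start
    while cell != goal:                            # descend the wavefront values to the goal
        cell = min(((cell[0] + dr, cell[1] + dc)
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if (cell[0] + dr, cell[1] + dc) in dist), key=dist.get)
        path.append(cell)
    return path

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(wavefront_path(grid, start=(0, 0), goal=(2, 3)))
```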
Listening – Let England Shake
This week I am listening to “Let England Shake” by P.J. Harvey
Read – Becoming An Authority
Today I finished reading “Becoming An Authority: Stop Waiting For Permission To Dominate Your Niche And Start Making A Difference” by Rebekah Welch
Read – Feynman Lectures On Gravitation
Today I finished reading “Feynman Lectures On Gravitation” by Richard Feynman
Read – Shelter Stories
Today I finished reading “Shelter Stories” by Patrick McDonnell
Paper – Efficiently Learning a Detection Cascade with Sparse Eigenvectors
Today I read a paper titled “Efficiently Learning a Detection Cascade with Sparse Eigenvectors”
The abstract is:
In this work, we first show that feature selection methods other than boosting can also be used for training an efficient object detector.
In particular, we introduce Greedy Sparse Linear Discriminant Analysis (GSLDA) [Moghaddam et al., 2007] for its conceptual simplicity and computational efficiency; and slightly better detection performance is achieved compared with [Viola and Jones, 2004].
Moreover, we propose a new technique, termed Boosted Greedy Sparse Linear Discriminant Analysis (BGSLDA), to efficiently train a detection cascade.
BGSLDA exploits the sample re-weighting property of boosting and the class-separability criterion of GSLDA.