Today I finished reading “Meet Mr. Mulliner” by P.G. Wodehouse
Read – The Christmas Day Kitten
Today I finished reading “The Christmas Day Kitten” by James Herriot
Read – Pieces 2: Phantom Cats
Today I finished reading “Pieces 2: Phantom Cats” by Masamune Shirow
Read – The Oxford Book of Modern Science Writing
Today I finished reading “The Oxford Book of Modern Science Writing” by Richard Dawkins
Read – All About Jeeves
Today I finished reading “All About Jeeves” by P.G. Wodehouse
Read – Pieces 9: Kokon Otogizoshi Shu Hiden
Today I finished reading “Pieces 9: Kokon Otogizoshi Shu Hiden” by Masamune Shirow
Paper – Obstacle evasion using fuzzy logic in a sliding blades problem environment
Today I read a paper titled “Obstacle evasion using fuzzy logic in a sliding blades problem environment”
The abstract is:
This paper discusses obstacle avoidance using fuzzy logic and a shortest-path algorithm.
This paper also introduces the sliding blades problem and illustrates how a drone can navigate itself through the swinging blade obstacles while tracing a semi-optimal path and maintaining constant velocity.
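The abstract stops short of the controller details; as a rough, hypothetical sketch of the fuzzy-logic side (the membership functions and rule outputs below are my own illustration, not the paper's), a two-rule evasion controller might look like:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_steer(obstacle_distance):
    """Tiny fuzzy controller: the nearer the obstacle, the harder the
    evasive steering, via two rules defuzzified by a weighted average."""
    near = triangular(obstacle_distance, -1.0, 0.0, 5.0)
    far = triangular(obstacle_distance, 0.0, 5.0, 11.0)
    # Rule 1: IF near THEN steer hard (1.0); Rule 2: IF far THEN steer gently (0.1)
    total = near + far
    return (near * 1.0 + far * 0.1) / total if total else 0.0

steer_close = fuzzy_steer(0.5)   # obstacle looming
steer_far = fuzzy_steer(5.0)     # obstacle comfortably distant
```

The blending between rules is what distinguishes this from a hard threshold: steering strength varies smoothly with distance.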
Read – When Crocs Fly
Today I finished reading “When Crocs Fly: A Pearls Before Swine Collection” by Stephan Pastis
Read – Friends Should Know When They’re Not Wanted
Today I finished reading “Friends Should Know When They’re Not Wanted: A Sociopath’s Guide to Friendship” by Stephan Pastis
Paper – Remote Health Coaching System and Human Motion Data Analysis for Physical Therapy with Microsoft Kinect
Today I read a paper titled “Remote Health Coaching System and Human Motion Data Analysis for Physical Therapy with Microsoft Kinect”
The abstract is:
This paper summarizes the recent progress we have made in computer vision technologies for physical therapy with accessible and affordable devices.
We first introduce the remote health coaching system we built with Microsoft Kinect.
Since the motion data captured by Kinect is noisy, we investigate the data accuracy of Kinect with respect to a high-accuracy motion capture system.
We also propose an outlier data removal algorithm based on the data distribution.
In order to generate the kinematic parameters from the noisy data captured by Kinect, we propose a kinematic filtering algorithm based on the Unscented Kalman Filter and the kinematic model of the human skeleton.
The proposed algorithm can obtain smooth kinematic parameters with reduced noise compared to those generated from the raw motion data from Kinect.
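The abstract doesn't specify the outlier-removal algorithm; a common distribution-based choice for this kind of noisy joint stream is a median-absolute-deviation filter, sketched here (threshold and data are illustrative, not from the paper):

```python
import numpy as np

def remove_outliers_mad(samples, threshold=3.5):
    """Drop samples whose modified z-score (based on the median absolute
    deviation) exceeds the threshold, and return the inliers."""
    samples = np.asarray(samples, dtype=float)
    median = np.median(samples)
    mad = np.median(np.abs(samples - median))
    if mad == 0:
        return samples  # degenerate case: no spread to measure
    modified_z = 0.6745 * (samples - median) / mad
    return samples[np.abs(modified_z) <= threshold]

# Noisy Kinect-style joint coordinate stream with one glitch frame.
stream = [0.51, 0.50, 0.52, 0.49, 5.00, 0.50, 0.51]
clean = remove_outliers_mad(stream)
```

The median-based statistics are what make this robust: a single glitch frame barely moves the median, so it stands out sharply against the rest.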
Read – A Prefect’s Uncle
Today I finished reading “A Prefect’s Uncle” by P.G. Wodehouse
Studying – 3DS Max
This week I am studying “3DS Max”
I use 3DS Max quite regularly, and have been using it since it was just 3D Studio back in the mid-’90s.
However, I am much more facile with Maya and SoftImage and have never sat down and just got to know 3DS Max inside-and-out.
Looking to fix that oversight this month with this extensive workshop from Gnomon.
Paper – On the Computation of the Optimal Connecting Points in Road Networks
Today I read a paper titled “On the Computation of the Optimal Connecting Points in Road Networks”
The abstract is:
In this paper we consider a set of travelers, starting from likely different locations towards a common destination within a road network, and propose solutions to find the optimal connecting points for them.
A connecting point is a vertex of the network where a subset of the travelers meet and continue traveling together towards the next connecting point or the destination.
The notion of optimality is with regard to a given aggregated travel cost, e.g., travel distance or shared fuel cost.
This problem by itself is new and we make it even more interesting (and complex) by considering affinity factors among the users, i.e., how much a user likes to travel together with another one.
This plays a fundamental role in determining where the connecting points are and how subsets of travelers are formed.
We propose three methods for addressing this problem, one that relies on a fast and greedy approach that finds a sub-optimal solution, and two others that yield globally optimal solutions.
We evaluate all proposed approaches through experiments, where collections of real datasets are used to assess the trade-offs, behavior and characteristics of each method.
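None of the paper's three methods are spelled out in the abstract; as a baseline for the simplest variant of the problem (all travelers meet at one connecting point, then ride together to the destination), a brute-force search over vertices makes the objective concrete. Graph, names and costs below are invented:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source in a weighted undirected graph
    given as {vertex: [(neighbor, weight), ...]}; assumes connectivity."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def best_meeting_point(graph, starts, destination):
    """Vertex minimizing total travel cost when all travelers meet at one
    connecting point and then travel together to the destination."""
    to_dest = dijkstra(graph, destination)  # undirected: distance to dest
    best, best_cost = None, float("inf")
    for v in graph:
        from_starts = sum(dijkstra(graph, s)[v] for s in starts)
        cost = from_starts + to_dest[v]  # shared final leg counted once
        if cost < best_cost:
            best, best_cost = v, cost
    return best, best_cost

roads = {
    "A": [("B", 1), ("C", 3)],
    "B": [("A", 1), ("C", 1)],
    "C": [("A", 3), ("B", 1), ("D", 1)],
    "D": [("C", 1)],
}
meet, cost = best_meeting_point(roads, starts=["A", "B"], destination="D")
```

Counting the shared leg once is what makes meeting early attractive; affinity factors, as in the paper, would further weight which subsets of travelers should share a leg at all.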
Read – Tanequil
Today I finished reading “Tanequil” by Terry Brooks
Paper – Genetic cellular neural networks for generating three-dimensional geometry
Today I read a paper titled “Genetic cellular neural networks for generating three-dimensional geometry”
The abstract is:
There are a number of ways to procedurally generate interesting three-dimensional shapes, and a method where a cellular neural network is combined with a mesh growth algorithm is presented here.
The aim is to create a shape from a genetic code in such a way that a crude search can find interesting shapes.
Identical neural networks are placed at each vertex of a mesh which can communicate with neural networks on neighboring vertices.
The outputs of the neural networks determine how the mesh grows, allowing interesting shapes to be produced emergently, mimicking some of the complexity of biological organism development.
Since the neural networks’ parameters can be freely mutated, the approach is amenable for use in a genetic algorithm.
Paper – Recovering 6D Object Pose and Predicting Next-Best-View in the Crowd
Today I read a paper titled “Recovering 6D Object Pose and Predicting Next-Best-View in the Crowd”
The abstract is:
Object detection and 6D pose estimation in the crowd (scenes with multiple object instances, severe foreground occlusions and background distractors), has become an important problem in many rapidly evolving technological areas such as robotics and augmented reality.
Single shot-based 6D pose estimators with manually designed features are still unable to tackle the above challenges, motivating the research towards unsupervised feature learning and next-best-view estimation.
In this work, we present a complete framework for both single shot-based 6D object pose estimation and next-best-view prediction based on Hough Forests, the state of the art object pose estimator that performs classification and regression jointly.
Rather than using manually designed features we a) propose an unsupervised feature learnt from depth-invariant patches using a Sparse Autoencoder and b) offer an extensive evaluation of various state of the art features.
Furthermore, taking advantage of the clustering performed in the leaf nodes of Hough Forests, we learn to estimate the reduction of uncertainty in other views, formulating the problem of selecting the next-best-view.
To further improve pose estimation, we propose an improved joint registration and hypotheses verification module as a final refinement step to reject false detections.
We provide two additional challenging datasets inspired by realistic scenarios to extensively evaluate the state of the art and our framework.
One is related to domestic environments and the other depicts a bin-picking scenario mostly found in industrial settings.
We show that our framework significantly outperforms state of the art both on public and on our datasets.
Read – The Hacker and the Ants
Today I finished reading “The Hacker and the Ants” by Rudy Rucker
Chicken Soup for the Soulvaki
According to the San Jose Convention Center, a dish such as Chicken Souvlaki consists of…
Iceberg lettuce.
Raw onions.
Cilantro.
Lots and lots and lots of cilantro.
And some steamed chicken.
No yoghurt and cucumber dip.
No pita bread.
It is a bit like describing the music of Mozart in the written word.
Nothing conveys the experience of the music except the music itself.
Paper – High statistics measurements of pedestrian dynamics
Today I read a paper titled “High statistics measurements of pedestrian dynamics”
The abstract is:
Understanding the complex behavior of pedestrians walking in crowds is a challenge for both science and technology.
In particular, obtaining reliable models for crowd dynamics, capable of exhibiting qualitatively and quantitatively the observed emergent features of pedestrian flows, may have a remarkable impact on matters such as security, comfort and structural serviceability.
Aiming at a quantitative understanding of basic aspects of pedestrian dynamics, extensive and high-accuracy measurements of pedestrian trajectories have been performed.
More than 100,000 real-life, time-resolved trajectories of people walking along a trafficked corridor in a building of the Eindhoven University of Technology, The Netherlands, have been recorded.
A measurement strategy based on Microsoft Kinect™ has been used; the trajectories of pedestrians have been analyzed as ensemble data.
The main result consists of statistical descriptions of pedestrian characteristic kinematic quantities such as positions and fundamental diagrams, possibly conditioned on local crowding status (e.g., one or more pedestrians walking, presence of co-flows and counter-flows).
Read – The Prince and Betty
Today I finished reading “The Prince and Betty” by P.G. Wodehouse
Read – Twenty-Odd Ducks
Today I finished reading “Twenty-Odd Ducks: Why, Every Punctuation Mark Counts!” by Lynne Truss
Read – The Science of Discworld IV: Judgement Day
Today I finished reading “The Science of Discworld IV: Judgement Day” by Terry Pratchett
Paper – Joint Belief and Intent Prediction for Collision Avoidance in Autonomous Vehicles
Today I read a paper titled “Joint Belief and Intent Prediction for Collision Avoidance in Autonomous Vehicles”
The abstract is:
This paper describes a novel method for allowing an autonomous ground vehicle to predict the intent of other agents in an urban environment.
This method, termed the cognitive driving framework, models both the intent and the potentially false beliefs of an obstacle vehicle.
By modeling the relationships between these variables as a dynamic Bayesian network, filtering can be performed to calculate the intent of the obstacle vehicle as well as its belief about the environment.
This joint knowledge can be exploited to plan safer and more efficient trajectories when navigating in an urban environment.
Simulation results are presented that demonstrate the ability of the proposed method to calculate the intent of obstacle vehicles as an autonomous vehicle navigates a road intersection such that preventative maneuvers can be taken to avoid imminent collisions.
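The paper's dynamic Bayesian network isn't given in the abstract, but the predict/update filtering it describes can be sketched with a discrete Bayes filter over hypothetical intents (the states, models and numbers below are illustrative only, not the authors'):

```python
def filter_intent(prior, transition, likelihoods, observations):
    """Discrete Bayes filter: propagate a belief over hidden intents
    through a transition model, then condition on each observation."""
    belief = dict(prior)
    for obs in observations:
        # Predict: push the belief through the transition model.
        predicted = {j: sum(belief[i] * transition[i][j] for i in belief)
                     for j in belief}
        # Update: weight by the observation likelihood and renormalize.
        unnorm = {j: predicted[j] * likelihoods[j][obs] for j in predicted}
        z = sum(unnorm.values())
        belief = {j: p / z for j, p in unnorm.items()}
    return belief

# Toy obstacle-vehicle intents at an intersection: yielding vs. going.
prior = {"yield": 0.5, "go": 0.5}
transition = {"yield": {"yield": 0.9, "go": 0.1},
              "go": {"yield": 0.1, "go": 0.9}}
likelihoods = {"yield": {"slowing": 0.8, "speeding": 0.2},
               "go": {"slowing": 0.2, "speeding": 0.8}}
belief = filter_intent(prior, transition, likelihoods, ["slowing", "slowing"])
```

Two consecutive "slowing" observations concentrate the belief on the yielding intent, which is the kind of joint inference the planner can exploit.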
Paper – Control of Memory, Active Perception, and Action in Minecraft
Today I read a paper titled “Control of Memory, Active Perception, and Action in Minecraft”
The abstract is:
In this paper, we introduce a new set of reinforcement learning (RL) tasks in Minecraft (a flexible 3D world).
We then use these tasks to systematically compare and contrast existing deep reinforcement learning (DRL) architectures with our new memory-based DRL architectures.
These tasks are designed to emphasize, in a controllable manner, issues that pose challenges for RL methods including partial observability (due to first-person visual observations), delayed rewards, high-dimensional visual observations, and the need to use active perception in a correct manner so as to perform well in the tasks.
While these tasks are conceptually simple to describe, by virtue of having all of these challenges simultaneously they are difficult for current DRL architectures.
Additionally, we evaluate the generalization performance of the architectures on environments not used during training.
The experimental results show that our new architectures generalize to unseen environments better than existing DRL architectures.
Studying – After Effects Guru Training
This month I am studying “After Effects Guru Training”
A couple of weekend workshops on using After Effects and really getting to know the ins and outs.
Whilst I don’t like the idea of having to show up at a specific location at a specific time to get knowledge, I am actually hoping that the face-to-face interaction with the instructor and other After Effects users will accelerate my learning.
Update: Nope. Most of the people taking this class had never touched After Effects until they walked into the classroom.
Update #2: And the instructor is very slow at imparting information. Good to ensure people got through the exercises but frustrating for those of us who already knew After Effects reasonably well.
Update #3: I wasn’t the only one who chafed at the slow pace.
Read – Young Men in Spats
Today I finished reading “Young Men in Spats” by P.G. Wodehouse
Read – Unweaving the Rainbow
Today I finished reading “Unweaving the Rainbow: Science, Delusion and the Appetite for Wonder” by Richard Dawkins
Read – Pandora in the Crimson Shell: Ghost Urn Vol. 5
Today I finished reading “Pandora in the Crimson Shell: Ghost Urn Vol. 5” by Masamune Shirow
Paper – Determining the best attributes for surveillance video keywords generation
Today I read a paper titled “Determining the best attributes for surveillance video keywords generation”
The abstract is:
Automatic video keyword generation is one of the key ingredients in reducing the burden of security officers in analyzing surveillance videos.
Keywords or attributes are generally chosen manually based on expert knowledge of surveillance.
Most existing works primarily aim at either supervised learning approaches relying on extensive manual labelling, or hierarchical probabilistic models that assume the features are extracted using the bag-of-words approach, thus limiting the utilization of other features.
To address this, we turn our attention to automatic attribute discovery approaches.
However, it is not clear which automatic discovery approach can discover the most meaningful attributes.
Furthermore, little research has been done on how to compare and choose the best automatic attribute discovery methods.
In this paper, we propose a novel approach, based on the shared structure exhibited amongst meaningful attributes, that enables us to compare different automatic attribute discovery approaches. We then validate our approach by comparing various attribute discovery methods such as PiCoDeS on two attribute datasets.
The evaluation shows that our approach is able to select the automatic discovery approach that discovers the most meaningful attributes.
We then employ the best discovery approach to generate keywords for videos recorded from a surveillance system.
This work shows it is possible to massively reduce the amount of manual work in generating video keywords without limiting ourselves to a particular video feature descriptor.
Read – Empowered Special #5: Nine Beers with Ninjette
Today I finished reading “Empowered Special #5: Nine Beers with Ninjette” by Adam Warren
Living in the Bayesian area
Years past we used to use simple, naive keyword filtering to identify junk email as spam.
But the spammers, driven by profits, got more sophisticated, and software developers came up with fancy Bayesian filtering to solve all that: clever packages that run on your server to determine spam, massive distributed systems with datacenters to identify patterns, and, often worse than the problem itself, paid-for solutions (Norton, Avast, et al).
But…
Most spammers are predictable.
You see, they want to spam, but they also want to appear legitimate. Or at least, just legitimate enough.
Spammers want to offer a way for you to unsubscribe from their crap so they comply, just barely, with a law they don’t care about, enough to be able to say “but we gave people a way out!” Each and every spammer, in their cleverness, provides an unsubscribe link.
And everybody knows you shouldn’t click on those unsubscribe links because it will just invite more spam.
But if you add “unsubscribe” to your keyword email filter, 99.9% of all spam suddenly dries up. And if there is a newsletter you legitimately want, it is easy to add to your whitelist.
And so we have come full circle. Back to simple keyword filtering for handling our spam.
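The whole scheme amounts to a few lines. A minimal sketch of such a keyword filter with a whitelist (addresses and messages are made up):

```python
def is_spam(message, keywords=("unsubscribe",), whitelist=()):
    """Flag mail whose body contains a trigger keyword, unless the
    sender is explicitly whitelisted."""
    if message["from"] in whitelist:
        return False
    body = message["body"].lower()
    return any(keyword in body for keyword in keywords)

spam = {"from": "deals@example.com",
        "body": "Great offers! Click here to UNSUBSCRIBE."}
wanted = {"from": "news@example.org",
          "body": "Monthly newsletter. Unsubscribe at any time."}
```

The whitelist check runs first, which is exactly the "easy to add the newsletters you want" escape hatch described above.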
Read – Naked Words 2.0
Today I finished reading “Naked Words 2.0: The Effective 157-Word Email” by Gisela Hausmann
Read – The Shepherd’s Crown
Today I finished reading “The Shepherd’s Crown” by Terry Pratchett
Read – Slow Apocalypse
Today I finished reading “Slow Apocalypse” by John Varley
Paper – Feature Lines for Illustrating Medical Surface Models: Mathematical Background and Survey
Today I read a paper titled “Feature Lines for Illustrating Medical Surface Models: Mathematical Background and Survey”
The abstract is:
This paper provides a tutorial and survey for a specific kind of illustrative visualization technique: feature lines.
We examine different feature line methods.
For this, we provide the differential geometry behind these concepts and adapt this mathematical field to the discrete differential geometry.
All discrete differential geometry terms are explained for triangulated surface meshes.
These utilities serve as basis for the feature line methods.
We provide the reader with all knowledge to re-implement every feature line method.
Furthermore, we summarize the methods and suggest a guideline for which kind of surface which feature line algorithm is best suited.
Our work is motivated by, but not restricted to, medical and biological surface models.
Read – The Pusher
Today I finished reading “The Pusher” by John Varley
Read – America, I Like You
Today I finished reading “America, I Like You” by P.G. Wodehouse
Paper – The Information-Collecting Vehicle Routing Problem: Stochastic Optimization for Emergency Storm Response
Today I read a paper titled “The Information-Collecting Vehicle Routing Problem: Stochastic Optimization for Emergency Storm Response”
The abstract is:
Utilities face the challenge of responding to power outages due to storms and ice damage, but most power grids are not equipped with sensors to pinpoint the precise location of the faults causing the outage.
Instead, utilities have to depend primarily on phone calls (trouble calls) from customers who have lost power to guide the dispatching of utility trucks.
In this paper, we develop a policy that routes a utility truck to restore outages in the power grid as quickly as possible, using phone calls to create beliefs about outages, but also using utility trucks as a mechanism for collecting additional information.
This means that routing decisions change not only the physical state of the truck (as it moves from one location to another) and the grid (as the truck performs repairs), but also our belief about the network, creating the first stochastic vehicle routing problem that explicitly models information collection and belief modeling.
We address the problem of managing a single utility truck, which we start by formulating as a sequential stochastic optimization model which captures our belief about the state of the grid.
We propose a stochastic lookahead policy, and use Monte Carlo tree search (MCTS) to produce a practical policy that is asymptotically optimal.
Simulation results show that the developed policy restores the power grid much faster compared to standard industry heuristics.
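The paper's full model isn't in the abstract, but the core idea of turning trouble calls into outage beliefs is a per-node Bayes update; a toy version (priors and call likelihoods are invented for illustration, not from the paper):

```python
def outage_belief(prior, call_prob_given_out, call_prob_given_ok, calls):
    """Per-node posterior that a grid segment is faulted, given which
    customers phoned in (True) and which stayed silent (False)."""
    posteriors = {}
    for node, received in calls.items():
        p_out = prior[node]
        if received:
            num = call_prob_given_out * p_out
            den = num + call_prob_given_ok * (1 - p_out)
        else:
            # Silence is weak evidence the segment is fine.
            num = (1 - call_prob_given_out) * p_out
            den = num + (1 - call_prob_given_ok) * (1 - p_out)
        posteriors[node] = num / den
    return posteriors

prior = {"n1": 0.1, "n2": 0.1}
calls = {"n1": True, "n2": False}
belief = outage_belief(prior, 0.7, 0.01, calls)
```

A truck routed to inspect a node then resolves its state outright, which is the information-collection twist: visits update the belief as well as the grid.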
Paper – Relating Cascaded Random Forests to Deep Convolutional Neural Networks for Semantic Segmentation
Today I read a paper titled “Relating Cascaded Random Forests to Deep Convolutional Neural Networks for Semantic Segmentation”
The abstract is:
We consider the task of pixel-wise semantic segmentation given a small set of labeled training images.
Among two of the most popular techniques to address this task are Random Forests (RF) and Neural Networks (NN).
The main contribution of this work is to explore the relationship between two special forms of these techniques: stacked RFs and deep Convolutional Neural Networks (CNN).
We show that there exists a mapping from stacked RF to deep CNN, and an approximate mapping back.
This insight gives two major practical benefits: Firstly, deep CNNs can be intelligently constructed and initialized, which is crucial when dealing with a limited amount of training data.
Secondly, it can be utilized to create a new stacked RF with improved performance.
Furthermore, this mapping yields a new CNN architecture, that is well suited for pixel-wise semantic labeling.
We experimentally verify these practical benefits for two different application scenarios in computer vision and biology, where the layout of parts is important: Kinect-based body part labeling from depth images, and somite segmentation in microscopy images of developing zebrafish.
Paper – Where’s My Drink? Enabling Peripheral Real World Interactions While Using HMDs
Today I read a paper titled “Where’s My Drink? Enabling Peripheral Real World Interactions While Using HMDs”
The abstract is:
Head Mounted Displays (HMDs) allow users to experience virtual reality with a great level of immersion.
However, even simple physical tasks like drinking a beverage can be difficult and awkward while in a virtual reality experience.
We explore mixed reality renderings that selectively incorporate the physical world into the virtual world for interactions with physical objects.
We conducted a user study comparing four rendering techniques that balance immersion in a virtual world with ease of interaction with the physical world.
Finally, we discuss the pros and cons of each approach, suggesting guidelines for future rendering techniques that bring physical objects into virtual reality.
Read – The Sales Bible
Today I finished reading “The Sales Bible: The Ultimate Sales Resource” by Jeffrey Gitomer
Studying – Integrating type into videos with After Effects
This month I am studying “Integrating type into videos with After Effects”
Short video course and exercise files on how to do various type effects in After Effects.
Update: Pretty basic stuff. Not sure it was worth my time. Wrapped everything up inside of a week. Think I will just spend the rest of the month practicing my figure and landscape drawing.
Paper – Usability Engineering of Games: A Comparative Analysis of Measuring Excitement Using Sensors, Direct Observations and Self-Reported Data
Today I read a paper titled “Usability Engineering of Games: A Comparative Analysis of Measuring Excitement Using Sensors, Direct Observations and Self-Reported Data”
The abstract is:
Usability engineering and usability testing are concepts that continue to evolve.
Interesting research studies and new ideas come up every now and then.
This paper tests the hypothesis of using EDA-based physiological measurements as a usability testing tool by considering three measures: observers’ opinions, self-reported data and EDA-based physiological sensor data.
These data were analyzed comparatively and statistically.
It concludes by discussing the findings obtained from those subjective and objective measures, which partially support the hypothesis.
Read – Tales from the Drones Club
Today I finished reading “Tales from the Drones Club” by P.G. Wodehouse
Paper – Detecting and avoiding frontal obstacles from monocular camera for micro unmanned aerial vehicles
Today I read a paper titled “Detecting and avoiding frontal obstacles from monocular camera for micro unmanned aerial vehicles”
The abstract is:
In the literature, several approaches try to make UAVs fly autonomously, e.g., by extracting perspective cues such as straight lines.
However, such cues are only available in well-defined, human-made environments, and many other cues require sufficient texture information.
Our main target is to detect and avoid frontal obstacles from a monocular camera using a quad-rotor AR.Drone 2 by exploiting optical flow as motion parallax; the drone is permitted to fly at a speed of 1 m/s and an altitude ranging from 1 to 4 meters above ground level.
In general, detecting and avoiding frontal obstacles is a quite challenging problem because optical flow has some limitations which should be taken into account, e.g., lighting conditions and the aperture problem.
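The paper's pipeline isn't detailed in the abstract; one minimal looming cue derivable from optical flow is the mean divergence of the flow field, which is positive for the expanding pattern a frontal obstacle produces (the flow below is synthetic, not the authors' data):

```python
import numpy as np

def expansion_score(flow_u, flow_v):
    """Mean divergence of a dense optical-flow field: positive when the
    flow expands outward, as it does when an obstacle looms ahead."""
    du_dx = np.gradient(flow_u, axis=1)
    dv_dy = np.gradient(flow_v, axis=0)
    return float(np.mean(du_dx + dv_dy))

# Synthetic radially expanding flow about the image center, the pattern
# a frontal surface produces as the camera approaches it.
ys, xs = np.mgrid[0:5, 0:5].astype(float)
score = expansion_score(xs - 2.0, ys - 2.0)
```

A score above some threshold would then trigger an evasive maneuver; the aperture problem and lighting limitations mentioned in the abstract corrupt exactly these flow estimates.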
Paper – Unscented Bayesian Optimization for Safe Robot Grasping
Today I read a paper titled “Unscented Bayesian Optimization for Safe Robot Grasping”
The abstract is:
We address the robot grasp optimization problem of unknown objects considering uncertainty in the input space.
Grasping unknown objects can be achieved by using a trial and error exploration strategy.
Bayesian optimization is a sample-efficient optimization algorithm that is especially suitable for these setups, as it actively reduces the number of trials for learning about the function to optimize.
In fact, this active object exploration is the same strategy that infants use to learn optimal grasps.
One problem that arises while learning grasping policies is that some configurations of grasp parameters may be very sensitive to error in the relative pose between the object and robot end-effector.
We call these configurations unsafe because small errors during grasp execution may turn good grasps into bad grasps.
Therefore, to reduce the risk of grasp failure, grasps should be planned in safe areas.
We propose a new algorithm, Unscented Bayesian optimization, that is able to perform sample-efficient optimization while taking into consideration input noise to find safe optima.
The contribution of Unscented Bayesian optimization is twofold, as it provides a new decision process that drives exploration to safe regions and a new selection procedure that chooses the optimum in terms of its safety without extra analysis or computational cost.
Both contributions are rooted in the strong theory behind the unscented transformation, a popular nonlinear approximation method.
We show its advantages with respect to the classical Bayesian optimization both in synthetic problems and in realistic robot grasp simulations.
The results highlight that our method achieves optimal and robust grasping policies after a few trials while the selected grasps remain in safe regions.
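The unscented transformation the authors build on can be sketched independently of their optimizer; for a linear function it reproduces the exact transformed mean and covariance, which makes a handy sanity check (kappa and the example are illustrative, not the paper's settings):

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=1.0):
    """Propagate a Gaussian (mean, cov) through a nonlinearity f using
    deterministic sigma points instead of linearization or sampling."""
    n = len(mean)
    scale = np.linalg.cholesky((n + kappa) * cov)
    sigma = [mean] + [mean + scale[:, i] for i in range(n)] \
                   + [mean - scale[:, i] for i in range(n)]
    w0 = kappa / (n + kappa)
    wi = 1.0 / (2 * (n + kappa))
    weights = [w0] + [wi] * (2 * n)
    ys = [f(s) for s in sigma]
    y_mean = sum(w * y for w, y in zip(weights, ys))
    y_cov = sum(w * np.outer(y - y_mean, y - y_mean)
                for w, y in zip(weights, ys))
    return y_mean, y_cov

# Sanity check: a linear map y = 2x doubles the mean, quadruples the variance.
mean = np.array([1.0])
cov = np.array([[1.0]])
y_mean, y_cov = unscented_transform(mean, cov, lambda x: 2.0 * x)
```

In the grasping setting, the "nonlinearity" would be the grasp-quality function and the input covariance the pose uncertainty, so the transformed moments score a candidate grasp by how it behaves under execution error.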
Paper – Some Experimental Issues in Financial Fraud Detection: An Investigation
Today I read a paper titled “Some Experimental Issues in Financial Fraud Detection: An Investigation”
The abstract is:
Financial fraud detection is an important problem with a number of design aspects to consider.
Issues such as algorithm selection and performance analysis will affect the perceived ability of proposed solutions, so for auditors and researchers to be able to sufficiently detect financial fraud it is necessary that these issues be thoroughly explored.
In this paper we will revisit the key performance metrics used for financial fraud detection with a focus on credit card fraud, critiquing the prevailing ideas and offering our own understandings.
There are many different performance metrics that have been employed in prior financial fraud detection research.
We will analyse several of the popular metrics and compare their effectiveness at measuring the ability of detection mechanisms.
We further investigated the performance of a range of computational intelligence techniques when applied to this problem domain, and explored the efficacy of several binary classification methods.
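The abstract's point about performance metrics is easy to demonstrate: on heavily imbalanced fraud data, accuracy alone looks excellent for a useless classifier. A minimal illustration (data invented):

```python
def confusion_metrics(y_true, y_pred):
    """Precision, recall, and accuracy for a binary fraud classifier;
    on heavily imbalanced data, accuracy alone is misleading."""
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    fp = sum((not t) and p for t, p in zip(y_true, y_pred))
    fn = sum(t and (not p) for t, p in zip(y_true, y_pred))
    tn = sum((not t) and (not p) for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    accuracy = (tp + tn) / len(y_true)
    return precision, recall, accuracy

# 2 fraudulent and 98 legitimate transactions; a classifier that
# predicts "legitimate" for everything still scores 98% accuracy.
y_true = [True] * 2 + [False] * 98
y_pred = [False] * 100
precision, recall, accuracy = confusion_metrics(y_true, y_pred)
```

Recall of zero exposes the failure that the 98% accuracy figure hides, which is why fraud-detection work leans on precision/recall-style metrics.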
Read – Uncle Fred Flits by
Today I finished reading “Uncle Fred Flits by” by P.G. Wodehouse
Read – Mathematicians in Love
Today I finished reading “Mathematicians in Love” by Rudy Rucker
Paper – Data Driven Robust Image Guided Depth Map Restoration
Today I read a paper titled “Data Driven Robust Image Guided Depth Map Restoration”
The abstract is:
Depth maps captured by modern depth cameras such as Kinect and Time-of-Flight (ToF) are usually contaminated by missing data and noise, and suffer from low resolution.
In this paper, we present a robust method for high-quality restoration of a degraded depth map with the guidance of the corresponding color image.
We solve the problem in an energy optimization framework that consists of a novel robust data term and smoothness term.
To accommodate not only the noise but also the inconsistency between depth discontinuities and the color edges, we model both the data term and smoothness term with a robust exponential error norm function.
We propose to use Iteratively Re-weighted Least Squares (IRLS) methods for efficiently solving the resulting highly non-convex optimization problem.
More importantly, we further develop a data-driven adaptive parameter selection scheme to properly determine the parameter in the model.
We show that the proposed approach can preserve fine details and sharp depth discontinuities even for a large upsampling factor (8×, for example).
Experimental results on both simulated and real datasets demonstrate that the proposed method outperforms recent state-of-the-art methods in coping with the heavy noise, preserving sharp depth discontinuities and suppressing the texture copy artifacts.
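The abstract names IRLS with an exponential error norm; applied to a toy robust line fit rather than depth maps (sigma and the data are illustrative, not the paper's model), the idea looks like:

```python
import numpy as np

def irls(A, b, sigma=5.0, iterations=20):
    """Iteratively Re-weighted Least Squares under an exponential error
    norm: each residual r is weighted by exp(-r^2 / sigma^2), so gross
    outliers are driven toward zero influence."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # ordinary least-squares start
    for _ in range(iterations):
        r = A @ x - b
        w = np.exp(-(r ** 2) / sigma ** 2)
        Aw = A * w[:, None]  # row-weighted design matrix
        x = np.linalg.solve(A.T @ Aw, A.T @ (w * b))
    return x

# Fit y = 2x + 1 from five clean points and one gross outlier.
xs = np.arange(6.0)
A = np.column_stack([xs, np.ones_like(xs)])
b = 2.0 * xs + 1.0
b[-1] = 50.0  # corrupted observation
coeffs = irls(A, b)
```

Each re-weighted solve is an ordinary weighted least-squares problem, which is what makes the highly non-convex robust objective tractable in practice.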