Today I finished reading “Fact. Fact. Bullsh*t!: Learn the Truth and Spot the Lie on Everything from Tequila-Made Diamonds to Tetris’s Soviet Roots – Plus Tons of Other Totally Random Facts from Science, History and Beyond!” by Neil Patrick Stewart
Paper – Fast keypoint detection in video sequences
Today I read a paper titled “Fast keypoint detection in video sequences”
The abstract is:
A number of computer vision tasks exploit a succinct representation of the visual content in the form of sets of local features.
Given an input image, feature extraction algorithms identify a set of keypoints and assign to each of them a description vector, based on the characteristics of the visual content surrounding the interest point.
Several tasks might require local features to be extracted from a video sequence, on a frame-by-frame basis.
Although temporal downsampling has proven to be an effective solution for mobile augmented reality and visual search, high temporal resolution is a key requirement for time-critical applications such as object tracking, event recognition, pedestrian detection, and surveillance.
In recent years, more and more computationally efficient visual feature detectors and descriptors have been proposed.
Nonetheless, such approaches are tailored to still images.
In this paper we propose a fast keypoint detection algorithm for video sequences that exploits the temporal coherence of the sequence of keypoints.
According to the proposed method, each frame is preprocessed so as to identify the parts of the input frame for which keypoint detection and description need to be performed.
Our experiments show that it is possible to achieve a reduction in computational time of up to 40%, without significantly affecting the task accuracy.
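Side note from me: the core idea (run the expensive detector only where the frame actually changed) is easy to picture. Here is a toy block-level change mask of my own devising, not the authors' algorithm; `block` and `thresh` are made-up parameters:

```python
import numpy as np

def changed_blocks(prev, curr, block=16, thresh=10.0):
    """Return a boolean mask of image blocks whose mean absolute
    difference from the previous frame exceeds `thresh`; only those
    blocks would be handed to the keypoint detector."""
    diff = np.abs(curr.astype(np.float32) - prev.astype(np.float32))
    h, w = curr.shape
    bh, bw = h // block, w // block
    mask = np.zeros((bh, bw), dtype=bool)
    for i in range(bh):
        for j in range(bw):
            tile = diff[i*block:(i+1)*block, j*block:(j+1)*block]
            mask[i, j] = tile.mean() > thresh
    return mask
```

A real implementation would also need to merge keypoints from unchanged blocks back in, which is where the interesting engineering lives.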
Read – Wodehouse Is The Best Medicine
Today I finished reading “Wodehouse Is The Best Medicine” by P.G. Wodehouse
Paper – Unsupervised Learning in Neuromemristive Systems
Today I read a paper titled “Unsupervised Learning in Neuromemristive Systems”
The abstract is:
Neuromemristive systems (NMSs) currently represent the most promising platform to achieve energy efficient neuro-inspired computation.
However, since the research field is less than a decade old, there are still countless algorithms and design paradigms to be explored within these systems.
One particular domain that remains to be fully investigated within NMSs is unsupervised learning.
In this work, we explore the design of an NMS for unsupervised clustering, which is a critical element of several machine learning algorithms.
Using a simple memristor crossbar architecture and learning rule, we are able to achieve performance which is on par with MATLAB’s k-means clustering.
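For my own reference, the k-means baseline they benchmark against is just Lloyd's algorithm. A minimal sketch (mine, nothing to do with the memristor crossbar itself):

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Minimal Lloyd's algorithm: alternately assign points to the
    nearest centroid and recompute centroids as cluster means."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # distance of every point to every centroid
        d = np.linalg.norm(X[:, None] - centroids[None, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids
```

The point of the paper is that a crossbar plus a local learning rule can match this without ever computing the distances explicitly.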
Paper – The Bees Algorithm for the Vehicle Routing Problem
Today I read a paper titled “The Bees Algorithm for the Vehicle Routing Problem”
The abstract is:
In this thesis we present a new algorithm for the Vehicle Routing Problem called the Enhanced Bees Algorithm.
It is adapted from a fairly recent algorithm, the Bees Algorithm, which was developed for continuous optimisation problems.
We show that the results obtained by the Enhanced Bees Algorithm are competitive with the best meta-heuristics available for the Vehicle Routing Problem (within 0.5% of the optimal solution for common benchmark problems).
We show that the algorithm has good runtime performance, producing results within 2% of the optimal solution within 60 seconds, making it suitable for use within real world dispatch scenarios.
Paper – Automatic Face Reenactment
Today I read a paper titled “Automatic Face Reenactment”
The abstract is:
We propose an image-based, facial reenactment system that replaces the face of an actor in an existing target video with the face of a user from a source video, while preserving the original target performance.
Our system is fully automatic and does not require a database of source expressions.
Instead, it is able to produce convincing reenactment results from a short source video captured with an off-the-shelf camera, such as a webcam, where the user performs arbitrary facial gestures.
Our reenactment pipeline is conceived as part image retrieval and part face transfer: The image retrieval is based on temporal clustering of target frames and a novel image matching metric that combines appearance and motion to select candidate frames from the source video, while the face transfer uses a 2D warping strategy that preserves the user’s identity.
Our system excels in simplicity as it does not rely on a 3D face model, it is robust under head motion and does not require the source and target performance to be similar.
We show convincing reenactment results for videos that we recorded ourselves and for low-quality footage taken from the Internet.
Studying – Baking Advanced pastry techniques
This month I am studying “Baking – Advanced pastry techniques”
The 2nd month of my advanced pastry techniques class.
Update: That… was hard work. And fun.
Paper – A Novel Human Computer Interaction Platform based College Mathematical Education Methodology
Today I read a paper titled “A Novel Human Computer Interaction Platform based College Mathematical Education Methodology”
The abstract is:
This article proposes an analysis of a novel human computer interaction (HCI) platform-based college mathematical education methodology.
Regarding the problems encountered when applying virtual reality technology in teaching, a satisfactory result can only be achieved by organizing and focusing professional and technical personnel, continuously improving researchers' professional knowledge during development, and staying close to the actual needs of teaching.
To obtain better education output, we combine the Kinect to form the HCI based teaching environment.
We firstly review the latest HCI technique and principles of college math courses, then we introduce basic components of the Kinect including the gesture segmentation, systematic implementation and the primary characteristics of the platform.
As the further step, we implement the system with the re-write of script code to build up the personalized HCI assisted education scenario.
The verification and simulation proves the feasibility of our method.
Read – The Snowball
Today I finished reading “The Snowball: Warren Buffett and the Business of Life” by Alice Schroeder
Paper – Heuristics for Planning, Plan Recognition and Parsing
Today I read a paper titled “Heuristics for Planning, Plan Recognition and Parsing”
The abstract is:
In a recent paper, we have shown that Plan Recognition over STRIPS can be formulated and solved using Classical Planning heuristics and algorithms.
In this work, we show that this formulation subsumes the standard formulation of Plan Recognition over libraries through a compilation of libraries into STRIPS theories.
The libraries correspond to AND/OR graphs that may be cyclic and where children of AND nodes may be partially ordered.
These libraries include Context-Free Grammars as a special case, where the Plan Recognition problem becomes a parsing with missing tokens problem.
Plan Recognition over the standard libraries becomes a Planning problem that can be easily solved by any modern planner, while recognition over more complex libraries, including Context-Free Grammars (CFGs), illustrates limitations of current Planning heuristics and suggests improvements that may be relevant to other Planning problems too.
Read – Frek and the Elixir
Today I finished reading “Frek and the Elixir” by Rudy Rucker
Read – The Heart of a Goof
Today I finished reading “The Heart of a Goof” by P.G. Wodehouse
Read – James Herriot’s Animal Stories
Today I finished reading “James Herriot’s Animal Stories” by James Herriot
Read – Uncle Dynamite
Today I finished reading “Uncle Dynamite” by P.G. Wodehouse
Read – Pearls, Girls And Monty Bodkin
Today I finished reading “Pearls, Girls And Monty Bodkin” by P.G. Wodehouse
Read – The Guild: Tink #2
Today I finished reading “The Guild: Tink #2” by Felicia Day
Studying – Baking Advanced pastry techniques
This month I am studying “Baking – Advanced pastry techniques”
The 1st month of advanced pastry techniques.
There’s a two month (four nights a week) class at the local pastry school.
And you think I am going to pass that up?
Update: Advanced means advanced and some students do not fucking understand what the word “advanced” actually fucking means.
Read – Usagi Yojimbo, Vol. 29: Two Hundred Jizo
Today I finished reading “Usagi Yojimbo, Vol. 29: Two Hundred Jizo” by Stan Sakai
Read – The Measure of the Magic
Today I finished reading “The Measure of the Magic” by Terry Brooks
Read – A Sociopath’s Guide to Friendship
Today I finished reading “A Sociopath’s Guide to Friendship” by Stephan Pastis
Paper – Probably Approximately Correct Greedy Maximization
Today I read a paper titled “Probably Approximately Correct Greedy Maximization”
The abstract is:
Submodular function maximization finds application in a variety of real-world decision-making problems.
However, most existing methods, based on greedy maximization, assume it is computationally feasible to evaluate F, the function being maximized.
Unfortunately, in many realistic settings F is too expensive to evaluate exactly even once.
We present probably approximately correct greedy maximization, which requires access only to cheap anytime confidence bounds on F and uses them to prune elements.
We show that, with high probability, our method returns an approximately optimal set.
We propose novel, cheap confidence bounds for conditional entropy, which appears in many common choices of F and for which it is difficult to find unbiased or bounded estimates.
Finally, results on a real-world dataset from a multi-camera tracking system in a shopping mall demonstrate that our approach performs comparably to existing methods, but at a fraction of the computational cost.
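To keep the baseline straight in my head: ordinary greedy submodular maximization repeatedly adds the element with the largest marginal gain in F. A sketch of mine with a toy coverage function (the paper's contribution is doing this when you only have cheap confidence bounds on F, which this sketch does not attempt):

```python
def greedy_max(elements, F, k):
    """Standard greedy: pick k elements, each time taking the one
    with the largest marginal gain F(S + [e]) - F(S)."""
    S = []
    for _ in range(k):
        best = max((e for e in elements if e not in S),
                   key=lambda e: F(S + [e]) - F(S))
        S.append(best)
    return S

# Example F: coverage (a classic submodular function) --
# the size of the union of the chosen sets.
sets = {'a': {1, 2, 3}, 'b': {3, 4}, 'c': {4, 5, 6, 7}}
F = lambda S: len(set().union(*[sets[e] for e in S]))
```

For submodular F this greedy is famously within (1 - 1/e) of optimal; the PAC variant keeps a similar guarantee while evaluating F only approximately.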
Read – Conan Volume 19: Xuthal of the Dusk
Today I finished reading “Conan Volume 19: Xuthal of the Dusk” by Fred Van Lente
Paper – An Online Mechanism for Ridesharing in Autonomous Mobility-on-Demand Systems
Today I read a paper titled “An Online Mechanism for Ridesharing in Autonomous Mobility-on-Demand Systems”
The abstract is:
With proper management, Autonomous Mobility-on-Demand (AMoD) systems have great potential to satisfy the transport demands of urban populations by providing safe, convenient, and affordable ridesharing services.
Meanwhile, such systems can substantially decrease private car ownership and use, and thus significantly reduce traffic congestion, energy consumption, and carbon emissions.
To achieve this objective, an AMoD system requires private information about the demand from passengers.
However, due to self-interestedness, passengers are unlikely to cooperate with the service providers in this regard.
Therefore, an online mechanism is desirable if it incentivizes passengers to truthfully report their actual demand.
For the purpose of promoting ridesharing, we hereby introduce a posted-price, integrated online ridesharing mechanism (IORS) that satisfies desirable properties such as ex-post incentive compatibility, individual rationality, and budget-balance.
Numerical results indicate the competitiveness of IORS compared with two benchmarks, namely the optimal assignment and an offline, auction-based mechanism.
Read – The Gem Collector
Today I finished reading “The Gem Collector” by P.G. Wodehouse
Paper – To Know Where We Are: Vision-Based Positioning in Outdoor Environments
Today I read a paper titled “To Know Where We Are: Vision-Based Positioning in Outdoor Environments”
The abstract is:
Augmented reality (AR) displays have recently become more and more popular because of their high intuitiveness for humans and the rapid development of high-quality head-mounted displays.
To achieve such displays with augmented information, highly accurate image registration or ego-positioning is required, but little attention has been paid to outdoor environments.
This paper presents a method for ego-positioning in outdoor environments with low cost monocular cameras.
To reduce the computational and memory requirements as well as the communication overheads, we formulate the model compression algorithm as a weighted k-cover problem for better preserving model structures.
Specifically for real-world vision-based positioning applications, we consider the issues with large scene change and propose a model update algorithm to tackle these problems.
A long-term positioning dataset spanning more than one month, with 106 sessions and 14,275 images, is constructed.
Based on both local and up-to-date models constructed in our approach, extensive experimental results show that high positioning accuracy (mean ~30.9 cm, stdev. ~15.4 cm) can be achieved, which outperforms existing vision-based algorithms.
Read – The Croc Ate My Homework
Today I finished reading “The Croc Ate My Homework: A Pearls Before Swine Collection” by Stephan Pastis
Read – The Little Nugget
Today I finished reading “The Little Nugget” by P.G. Wodehouse
Read – Fundamentals of Adventure Game Design
Today I finished reading “Fundamentals of Adventure Game Design” by Ernest Adams
Paper – Pushing the Limits of 3D Color Printing: Error Diffusion with Translucent Materials
Today I read a paper titled “Pushing the Limits of 3D Color Printing: Error Diffusion with Translucent Materials”
The abstract is:
Accurate color reproduction is important in many applications of 3D printing, from design prototypes to 3D color copies or portraits.
Although full color is available via other technologies, multi-jet printers have greater potential for graphical 3D printing, in terms of reproducing complex appearance properties.
However, to date these printers cannot produce full color, and doing so poses substantial technical challenges, from the sheer amount of data to the translucency of the available color materials.
In this paper, we propose an error diffusion halftoning approach to achieve full color with multi-jet printers, which operates on multiple isosurfaces or layers within the object.
We propose a novel traversal algorithm for voxel surfaces, which allows the transfer of existing error diffusion algorithms from 2D printing.
The resulting prints faithfully reproduce colors, color gradients and fine-scale details.
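The 2D ancestor they transfer to voxel surfaces is classic Floyd–Steinberg error diffusion. A sketch of the plain 2D version (my own; the paper's novelty is the traversal that makes this work on isosurfaces):

```python
import numpy as np

def floyd_steinberg(img):
    """Binarize a grayscale image (values in [0, 1]) by diffusing the
    quantization error of each pixel to its unvisited neighbours
    with the classic 7/16, 3/16, 5/16, 1/16 weights."""
    out = img.astype(np.float64).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return out
```

The halftone preserves the local mean intensity, which is exactly the property that makes the 3D color prints look right at viewing distance.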
Read – The Swords of Lankhmar
Today I finished reading “The Swords of Lankhmar” by Fritz Leiber
Read – Mrs Bradshaw’s Handbook
Today I finished reading “Mrs Bradshaw’s Handbook” by Terry Pratchett
Paper – HMM and DTW for evaluation of therapeutical gestures using kinect
Today I read a paper titled “HMM and DTW for evaluation of therapeutical gestures using kinect”
The abstract is:
Automatic recognition of the quality of movement in human beings is a challenging task, given the difficulty both in defining the constraints that make a movement correct, and the difficulty in using noisy data to determine if these constraints were satisfied.
This paper presents a method for the detection of deviations from the correct form in movements from physical therapy routines based on Hidden Markov Models, which is compared to Dynamic Time Warping.
The activities studied include upper and lower limb movements; the data used comes from a Kinect sensor.
Correct repetitions of the activities of interest were recorded, as well as deviations from these correct forms.
The ability of the proposed approach to detect these deviations was studied.
Results show that a system based on HMM is much more likely to determine if a certain movement has deviated from the specification.
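The DTW baseline they compare against is simple enough to sketch: dynamic programming over all monotone alignments of two sequences. My own minimal 1-D version:

```python
import math

def dtw(a, b):
    """Dynamic time warping distance between two 1-D sequences:
    the cost of the cheapest monotone alignment of a onto b."""
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # skip in a
                                 D[i][j - 1],      # skip in b
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

DTW gives a distance but no probabilistic model of what a "correct" movement looks like, which is presumably why the HMM wins at spotting deviations.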
Read – The Martian
Today I finished reading “The Martian” by Andy Weir
Paper – Landmark-Guided Elastic Shape Analysis of Human Character Motions
Today I read a paper titled “Landmark-Guided Elastic Shape Analysis of Human Character Motions”
The abstract is:
Motions of virtual characters in movies or video games are typically generated by recording actors using motion capturing methods.
Animations generated this way often need postprocessing, such as improving the periodicity of cyclic animations or generating entirely new motions by interpolation of existing ones.
Furthermore, search and classification of recorded motions becomes more and more important as the amount of recorded motion data grows.
In this paper, we will apply methods from shape analysis to the processing of animations.
More precisely, we will use the by-now-classical elastic metric model from shape matching, and extend it by incorporating additional inexact feature point information, which leads to an improved temporal alignment of different animations.
Paper – The History of Mobile Augmented Reality
Today I read a paper titled “The History of Mobile Augmented Reality”
The abstract is:
This document summarizes the major milestones in mobile Augmented Reality between 1968 and 2014.
Major parts of the list were compiled by members of the Christian Doppler Laboratory for Handheld Augmented Reality in 2010 (author list in alphabetical order) for the ISMAR society.
Later in 2013 it was updated, and more recent work was added during preparation of this report.
Permission is granted to copy and modify.
Paper – Simplified Boardgames
Today I read a paper titled “Simplified Boardgames”
The abstract is:
We formalize the Simplified Boardgames language, which describes a subclass of arbitrary board games.
The language structure is based on regular expressions, which makes the rules easily machine-processable while keeping them concise and fairly human-readable.
Paper – Debugging Machine Learning Tasks
Today I read a paper titled “Debugging Machine Learning Tasks”
The abstract is:
Unlike traditional programs (such as operating systems or word processors) which have large amounts of code, machine learning tasks use programs with relatively small amounts of code (written in machine learning libraries), but voluminous amounts of data.
Just like developers of traditional programs debug errors in their code, developers of machine learning tasks debug and fix errors in their data.
However, algorithms and tools for debugging and fixing errors in data are less common, when compared to their counterparts for detecting and fixing errors in code.
In this paper, we consider classification tasks where errors in training data lead to misclassifications in test points, and propose an automated method to find the root causes of such misclassifications.
Our root cause analysis is based on Pearl’s theory of causation, and uses Pearl’s PS (Probability of Sufficiency) as a scoring metric.
Our implementation, Psi, encodes the computation of PS as a probabilistic program, and uses recent work on probabilistic programs and transformations on probabilistic programs (along with gray-box models of machine learning algorithms) to efficiently compute PS.
Psi is able to identify root causes of data errors in interesting data sets.
Read – Working Effectively with Legacy Code
Today I finished reading “Working Effectively with Legacy Code” by Michael C. Feathers
Read – Stardust Memories
Today I finished reading “Stardust Memories” by Yukinobu Hoshino
Studying – Transforming a photo into a painting with Photoshop
This month I am studying “Transforming a photo into a painting with Photoshop”
My technical Photoshop skills are pretty sharp.
My creative Photoshop skills, not so much.
I am always open to learning a new creative technique because I generally suck at them.
Read – Clinical Procedures in Emergency Medicine
Today I finished reading “Clinical Procedures in Emergency Medicine” by James R. Roberts
Read – The Stainless Steel Rat Joins the Circus
Today I finished reading “The Stainless Steel Rat Joins the Circus” by Harry Harrison
Paper – Immersive Augmented Reality Training for Complex Manufacturing Scenarios
Today I read a paper titled “Immersive Augmented Reality Training for Complex Manufacturing Scenarios”
The abstract is:
In the complex manufacturing sector, considerable resources are devoted to developing new skills and training workers.
In that context, increasing the effectiveness of those processes and reducing the investment required is an outstanding issue.
In this paper we present an experiment that shows how modern Human Computer Interaction (HCI) metaphors such as collaborative mixed-reality can be used to transmit procedural knowledge and could eventually replace other forms of face-to-face training.
We implement a real-time Immersive Augmented Reality (IAR) setup with see-through cameras that allows for collaborative interactions that can simulate conventional forms of training.
The obtained results indicate that people who took the IAR training achieved the same performance as people in the conventional face-to-face training condition.
These results, their implications for future training and the use of HCI paradigms in this context are discussed in this paper.
Read – The Practical Princess and Other Liberating Fairy Tales
Today I finished reading “The Practical Princess and Other Liberating Fairy Tales” by Jay Williams
Read – Oxford Handbook of Emergency Medicine
Today I finished reading “Oxford Handbook of Emergency Medicine” by Jonathan Wyatt
Read – The Clicking of Cuthbert
Today I finished reading “The Clicking of Cuthbert” by P.G. Wodehouse
Paper – Heat as an inertial force: A quantum equivalence principle
Today I read a paper titled “Heat as an inertial force: A quantum equivalence principle”
The abstract is:
The firewall was introduced into black hole evaporation scenarios as a deus ex machina designed to break entanglements and preserve unitarity (Almheiri et al., 2013).
Here we show that the firewall actually exists and does break entanglements, but only in the context of a virtual reality for observers stationed near the horizon, who are following the long-term evolution of the hole.
These observers are heated by acceleration radiation at the Unruh temperature and see pair creation at the horizon as a high-energy phenomenon.
The objective reality is very different.
We argue that Hawking pair creation is entirely a low-energy process in which entanglements never arise.
The Hawking particles materialize as low-energy excitations with typical wavelength considerably larger than the black hole radius.
They thus emerge into a very non-uniform environment inimical to entanglement-formation.
Paper – Towards Reversible De-Identification in Video Sequences Using 3D Avatars and Steganography
Today I read a paper titled “Towards Reversible De-Identification in Video Sequences Using 3D Avatars and Steganography”
The abstract is:
We propose a de-identification pipeline that protects the privacy of humans in video sequences by replacing them with rendered 3D human models, hence concealing their identity while retaining the naturalness of the scene.
The original images of humans are steganographically encoded in the carrier image, i.e. the image containing the original scene and the rendered 3D human models.
We qualitatively explore the feasibility of our approach, utilizing the Kinect sensor and its libraries to detect and localize human joints.
A 3D avatar is rendered into the scene using the obtained joint positions, and the original human image is steganographically encoded in the new scene.
Our qualitative evaluation shows reasonably good results that merit further exploration.
Read – The Gypsy Morph
Today I finished reading “The Gypsy Morph” by Terry Brooks
Read – The Art of Readable Code
Today I finished reading “The Art of Readable Code” by Dustin Boswell
Good ideas. Many I agree with. Some I very much don’t. But like everything that is opinionated & style-based, what’s the saying? Fashions come and go.
But I will agree with this, a good aesthetic style makes code easily readable. And code is read far more than it is written.
Whether you agree with the contents of the book or not, I think this book should be one of those “required readings” books that every programmer should be made to read at least once in their life.