Read – The Breakthrough Principle of 16x
Today I finished reading “The Breakthrough Principle of 16x” by Richard Koch
Read – Teaching What Really Happened
Today I finished reading “Teaching What Really Happened: How to Avoid the Tyranny of Textbooks and Get Students Excited About Doing History” by James W. Loewen
Studying – Photoshop one-on-one intermediate
This month I am studying “Photoshop one-on-one intermediate”
Got through the fundamentals class faster than I expected so spent the remainder of the month just doing more self-directed exercises.
This month, taking it to the next level with the “intermediate” class.
Read – King of the Comics
Today I finished reading “King of the Comics: A Pearls Before Swine Collection” by Stephan Pastis
Paper – A particle filter to reconstruct a free-surface flow from a depth camera
Today I read a paper titled “A particle filter to reconstruct a free-surface flow from a depth camera”
The abstract is:
We investigate the combined use of a Kinect depth sensor and of a stochastic data assimilation method to recover free-surface flows.
More specifically, we use a Weighted ensemble Kalman filter method to reconstruct the complete state of free-surface flows from a sequence of depth images only.
This particle filter accounts for model and observations errors.
This data assimilation scheme is enhanced by the use of two observations instead of the single one used classically.
We evaluate the developed approach on two numerical test cases: the collapse of a water column as a toy example, and a flow in a suddenly expanding flume as a more realistic flow.
The robustness of the method to depth data errors and also to initial and inflow conditions is considered.
We illustrate the benefit of using two observations instead of one in the correction step, especially for unknown inflow boundary conditions.
Then, the performance of the Kinect sensor to capture temporal sequences of depth observations is investigated.
Finally, the efficiency of the algorithm is qualified for a wave in a real rectangular flat bottom tank.
It is shown that for basic initial conditions, the particle filter rapidly and remarkably reconstructs velocity and height of the free surface flow based on noisy measurements of the elevation alone.
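The correction step described here is easy to sketch with a toy particle filter: propagate height hypotheses with process noise, then reweight and resample them against a noisy depth measurement. This is only a 1D caricature of the paper’s Weighted ensemble Kalman filter, with noise levels I made up, not their actual shallow-water state estimation.

```python
import numpy as np

rng = np.random.default_rng(0)

def resample(particles, weights):
    """Multinomial resampling: draw particles in proportion to their weight."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

def pf_step(particles, observation, obs_noise=0.05, proc_noise=0.02):
    # Predict: propagate each height hypothesis with process noise.
    particles = particles + rng.normal(0.0, proc_noise, size=particles.shape)
    # Correct: reweight by the Gaussian likelihood of the depth observation.
    w = np.exp(-0.5 * ((particles - observation) / obs_noise) ** 2)
    w /= w.sum()
    return resample(particles, w)

particles = rng.uniform(0.0, 2.0, size=500)   # initial free-surface height hypotheses
for obs in [1.0, 0.98, 1.02, 1.0]:            # noisy height measurements
    particles = pf_step(particles, obs)
print(round(float(particles.mean()), 2))
```

After a few observations around 1.0, the particle cloud collapses onto the measured height, which is the “rapid reconstruction from elevation alone” behaviour the abstract reports.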
Paper – Bayesian Opponent Exploitation in Imperfect-Information Games
Today I read a paper titled “Bayesian Opponent Exploitation in Imperfect-Information Games”
The abstract is:
The two most fundamental problems in computational game theory are computing a Nash equilibrium and learning to exploit opponents given observations of their play (aka opponent exploitation).
The latter is perhaps even more important than the former: Nash equilibrium does not have a compelling theoretical justification in game classes other than two-player zero-sum, and furthermore for all games one can potentially do better by exploiting perceived weaknesses of the opponent than by following a static equilibrium strategy throughout the match.
The natural setting for opponent exploitation is the Bayesian setting where we have a prior model that is integrated with observations to create a posterior opponent model that we respond to.
The most natural, and well-studied, prior distribution is the Dirichlet distribution.
An exact polynomial-time algorithm is known for best-responding to the posterior distribution for an opponent assuming a Dirichlet prior with multinomial sampling in the case of normal-form games; however, for the case of imperfect-information games the best known algorithm is a sampling algorithm based on approximating an infinite integral without theoretical guarantees.
The main result is the first exact algorithm for accomplishing this in imperfect-information games.
We also present an algorithm for another natural setting where the prior is the uniform distribution over a polyhedron.
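The normal-form case the paper builds on is simple enough to sketch: combine a Dirichlet prior with observed action counts, take the posterior mean, and best-respond to it. The rock-paper-scissors payoffs and the counts below are toy values of my own, not from the paper.

```python
import numpy as np

prior = np.array([1.0, 1.0, 1.0])        # Dirichlet(1,1,1): uniform prior
observed = np.array([7, 2, 1])           # opponent's observed counts of R, P, S
posterior_mean = (prior + observed) / (prior + observed).sum()

# Row player's payoffs for rock-paper-scissors:
# rows = our action (R, P, S), columns = opponent action (R, P, S).
payoff = np.array([[ 0, -1,  1],
                   [ 1,  0, -1],
                   [-1,  1,  0]])

expected = payoff @ posterior_mean       # expected payoff of each of our actions
best_response = int(np.argmax(expected))
print(best_response)                     # 1, i.e. Paper, since Rock is most likely
```

The hard part the paper solves is doing this exactly when the opponent’s strategy is only partially observed through an imperfect-information game tree, where this simple counting no longer applies.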
Paper – A Real-Time Soft Robotic Patient Positioning System for Maskless Head-and-Neck Cancer Radiotherapy: An Initial Investigation
Today I read a paper titled “A Real-Time Soft Robotic Patient Positioning System for Maskless Head-and-Neck Cancer Radiotherapy: An Initial Investigation”
The abstract is:
We present an initial examination of a novel approach to accurately position a patient during head and neck intensity modulated radiotherapy (IMRT).
Position-based visual-servoing of a radio-transparent soft robot is used to control the flexion/extension cranial motion of a manikin head.
A Kinect RGB-D camera is used to measure head position and the error between the sensed and desired position is used to control a pneumatic system which regulates pressure within an inflatable air bladder (IAB).
Results show that the system is capable of controlling head motion to within 2mm with respect to a reference trajectory.
This establishes proof-of-concept that using multiple IABs and actuators can improve cancer treatment.
Paper – Deep Tracking: Seeing Beyond Seeing Using Recurrent Neural Networks
Today I read a paper titled “Deep Tracking: Seeing Beyond Seeing Using Recurrent Neural Networks”
The abstract is:
This paper presents, to the best of our knowledge, the first end-to-end object tracking approach which directly maps from raw sensor input to object tracks in sensor space, without requiring any feature engineering or system identification in the form of plant or sensor models.
Specifically, our system accepts a stream of raw sensor data at one end and, in real-time, produces an estimate of the entire environment state at the output including even occluded objects.
We achieve this by framing the problem as a deep learning task and exploit sequence models in the form of recurrent neural networks to learn a mapping from sensor measurements to object tracks.
In particular, we propose a learning method based on a form of input dropout which allows learning in an unsupervised manner, only based on raw, occluded sensor data without access to ground-truth annotations.
We demonstrate our approach using a synthetic dataset designed to mimic the task of tracking objects in 2D laser data — as commonly encountered in robotics applications — and show that it learns to track many dynamic objects despite occlusions and the presence of sensor noise.
Read – The Market Square Dog
Today I finished reading “The Market Square Dog” by James Herriot
Read – Human Error Processor 1.5
Today I finished reading “Human Error Processor 1.5” by Shirow Masamune
Read – Contagious: Why Things Catch On
Today I finished reading “Contagious: Why Things Catch On” by Jonah Berger
Read – God’s Debris
Today I finished reading “God’s Debris: A Thought Experiment” by Scott Adams
Read – Bill the Conqueror
Today I finished reading “Bill the Conqueror” by P.G. Wodehouse
Paper – 11 x 11 Domineering is Solved: The first player wins
Today I read a paper titled “11 x 11 Domineering is Solved: The first player wins”
The abstract is:
We have developed a program called MUDoS (Maastricht University Domineering Solver) that solves Domineering positions in a very efficient way.
This enables all positions solved so far (up to the 10 x 10 board) to be solved much more quickly (measured in number of investigated nodes).
More importantly, it enables the solution of the 11 x 11 Domineering board, a board that until now was far out of reach of previous Domineering solvers.
The solution needed the investigation of 259,689,994,008 nodes, using almost half a year of computation time on a single simple desktop computer.
The results show that under optimal play the first player wins the 11 x 11 Domineering game, irrespective of whether Vertical or Horizontal starts the game.
In addition, several other boards hitherto unsolved were solved.
Using the convention that Vertical starts, the 8 x 15, 11 x 9, 12 x 8, 12 x 15, 14 x 8, and 17 x 6 boards are all won by Vertical, whereas the 6 x 17, 8 x 12, 9 x 11, and 11 x 10 boards are all won by Horizontal.
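The underlying game-theoretic recursion is tiny, even if solving 11 x 11 took half a year of search. Here is a brute-force solver of my own for very small boards; MUDoS obviously uses vastly more sophisticated search and domain knowledge than this.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def solve(rows, cols, board=frozenset(), vertical_to_move=True):
    """Return True iff the player to move wins with optimal play."""
    moves = []
    for r in range(rows):
        for c in range(cols):
            if (r, c) in board:
                continue
            # Vertical occupies (r,c)+(r+1,c); Horizontal occupies (r,c)+(r,c+1).
            nr, nc = (r + 1, c) if vertical_to_move else (r, c + 1)
            if nr < rows and nc < cols and (nr, nc) not in board:
                moves.append(board | {(r, c), (nr, nc)})
    # A player with no legal move loses.
    return any(not solve(rows, cols, m, not vertical_to_move) for m in moves)

print(solve(2, 2))   # True: Vertical, moving first, wins on 2 x 2
print(solve(2, 3))   # True: taking the middle column splits the board
```

This blows up combinatorially almost immediately, which is exactly why 259 billion nodes were needed for 11 x 11.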
Read – Retrograde Summer
Today I finished reading “Retrograde Summer” by John Varley
Studying – Photoshop one-on-one fundamentals
This month I am studying “Photoshop one-on-one fundamentals”
I figured it was about time I actually sat down and updated my Adobe Photoshop skills for the new version.
Expecting this will take me most of the month if I do all of the exercises and any extra exercises too.
Read – Everyone Knows What a Dragon Looks Like
Today I finished reading “Everyone Knows What a Dragon Looks Like” by Jay Williams
Read – Empowered, Volume 7
Today I finished reading “Empowered, Volume 7” by Adam Warren
Read – Intron Depot 7 : Barb Wire 02
Today I finished reading “Intron Depot 7 : Barb Wire 02” by Masamune Shirow
Paper – Autonomous Vehicle Routing in Congested Road Networks
Today I read a paper titled “Autonomous Vehicle Routing in Congested Road Networks”
The abstract is:
This paper considers the problem of routing and rebalancing a shared fleet of autonomous (i.e., self-driving) vehicles providing on-demand mobility within a capacitated transportation network, where congestion might disrupt throughput.
We model the problem within a network flow framework and show that under relatively mild assumptions the rebalancing vehicles, if properly coordinated, do not lead to an increase in congestion (in stark contrast to common belief).
From an algorithmic standpoint, such theoretical insight suggests that the problem of routing customers and rebalancing vehicles can be decoupled, which leads to a computationally-efficient routing and rebalancing algorithm for the autonomous vehicles.
Numerical experiments and case studies corroborate our theoretical insights and show that the proposed algorithm outperforms state-of-the-art point-to-point methods by avoiding excess congestion on the road.
Collectively, this paper provides a rigorous approach to the problem of congestion-aware, system-wide coordination of autonomously driving vehicles, and to the characterization of the sustainability of such robotic systems.
Read – Fact. Fact. Bullsh*t!
Today I finished reading “Fact. Fact. Bullsh*t!: Learn the Truth and Spot the Lie on Everything from Tequila-Made Diamonds to Tetris’s Soviet Roots – Plus Tons of Other Totally Random Facts from Science, History and Beyond!” by Neil Patrick Stewart
Paper – Fast keypoint detection in video sequences
Today I read a paper titled “Fast keypoint detection in video sequences”
The abstract is:
A number of computer vision tasks exploit a succinct representation of the visual content in the form of sets of local features.
Given an input image, feature extraction algorithms identify a set of keypoints and assign to each of them a description vector, based on the characteristics of the visual content surrounding the interest point.
Several tasks might require local features to be extracted from a video sequence, on a frame-by-frame basis.
Although temporal downsampling has been proven to be an effective solution for mobile augmented reality and visual search, high temporal resolution is a key requirement for time-critical applications such as object tracking, event recognition, pedestrian detection, and surveillance.
In recent years, more and more computationally efficient visual feature detectors and descriptors have been proposed.
Nonetheless, such approaches are tailored to still images.
In this paper we propose a fast keypoint detection algorithm for video sequences, that exploits the temporal coherence of the sequence of keypoints.
According to the proposed method, each frame is preprocessed so as to identify the parts of the input frame for which keypoint detection and description need to be performed.
Our experiments show that it is possible to achieve a reduction in computational time of up to 40%, without significantly affecting the task accuracy.
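The core idea — reuse keypoints from unchanged regions and re-detect only where the frame changed — can be sketched in a few lines. The “detector” below is a toy brightness threshold I made up, standing in for a real detector such as FAST.

```python
import numpy as np

def detect(frame, region_mask):
    """Toy detector: keypoints are bright pixels inside the active region."""
    ys, xs = np.nonzero((frame > 0.8) & region_mask)
    return set(zip(ys.tolist(), xs.tolist()))

def track_keypoints(prev_frame, prev_kps, frame, diff_thresh=0.1):
    changed = np.abs(frame - prev_frame) > diff_thresh
    # Reuse old keypoints where nothing changed; re-detect only in changed areas.
    kept = {p for p in prev_kps if not changed[p]}
    return kept | detect(frame, changed)

f0 = np.zeros((8, 8)); f0[2, 2] = 1.0          # one bright spot
f1 = f0.copy(); f1[5, 5] = 1.0                 # a second spot appears
kps0 = detect(f0, np.ones_like(f0, dtype=bool))
kps1 = track_keypoints(f0, kps0, f1)
print(sorted(kps1))                            # [(2, 2), (5, 5)]
```

The expensive detection pass only touches the single changed pixel here, which is where the reported 40% speedup comes from in realistic sequences with limited motion.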
Read – Wodehouse Is The Best Medicine
Today I finished reading “Wodehouse Is The Best Medicine” by P.G. Wodehouse
Paper – Unsupervised Learning in Neuromemristive Systems
Today I read a paper titled “Unsupervised Learning in Neuromemristive Systems”
The abstract is:
Neuromemristive systems (NMSs) currently represent the most promising platform to achieve energy efficient neuro-inspired computation.
However, since the research field is less than a decade old, there are still countless algorithms and design paradigms to be explored within these systems.
One particular domain that remains to be fully investigated within NMSs is unsupervised learning.
In this work, we explore the design of an NMS for unsupervised clustering, which is a critical element of several machine learning algorithms.
Using a simple memristor crossbar architecture and learning rule, we are able to achieve performance which is on par with MATLAB’s k-means clustering.
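Since the crossbar is benchmarked against k-means, it is worth remembering how small the baseline is: a minimal Lloyd’s-algorithm k-means with farthest-point initialization fits in a dozen lines. This is the comparison baseline, not the memristor circuit itself, and the two-blob data is toy data of my own.

```python
import numpy as np

def kmeans(X, k, iters=50):
    # Farthest-point initialization, then standard Lloyd iterations.
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(axis=2), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers, labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (50, 2)),     # cluster around (0, 0)
               rng.normal(3.0, 0.1, (50, 2))])    # cluster around (3, 3)
centers, labels = kmeans(X, 2)
print(np.round(centers, 1))
```

That an analog crossbar with a simple local learning rule matches this is the interesting claim.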
Paper – The Bees Algorithm for the Vehicle Routing Problem
Today I read a paper titled “The Bees Algorithm for the Vehicle Routing Problem”
The abstract is:
In this thesis we present a new algorithm for the Vehicle Routing Problem called the Enhanced Bees Algorithm.
It is adapted from a fairly recent algorithm, the Bees Algorithm, which was developed for continuous optimisation problems.
We show that the results obtained by the Enhanced Bees Algorithm are competitive with the best meta-heuristics available for the Vehicle Routing Problem (within 0.5% of the optimal solution for common benchmark problems).
We show that the algorithm has good runtime performance, producing results within 2% of the optimal solution within 60 seconds, making it suitable for use within real world dispatch scenarios.
Paper – Automatic Face Reenactment
Today I read a paper titled “Automatic Face Reenactment”
The abstract is:
We propose an image-based, facial reenactment system that replaces the face of an actor in an existing target video with the face of a user from a source video, while preserving the original target performance.
Our system is fully automatic and does not require a database of source expressions.
Instead, it is able to produce convincing reenactment results from a short source video captured with an off-the-shelf camera, such as a webcam, where the user performs arbitrary facial gestures.
Our reenactment pipeline is conceived as part image retrieval and part face transfer: The image retrieval is based on temporal clustering of target frames and a novel image matching metric that combines appearance and motion to select candidate frames from the source video, while the face transfer uses a 2D warping strategy that preserves the user’s identity.
Our system excels in simplicity as it does not rely on a 3D face model, it is robust under head motion and does not require the source and target performance to be similar.
We show convincing reenactment results for videos that we recorded ourselves and for low-quality footage taken from the Internet.
Studying – Baking Advanced pastry techniques
This month I am studying “Baking – Advanced pastry techniques”
The 2nd month of my advanced pastry techniques.
Update: That… was hard work. And fun.
Paper – A Novel Human Computer Interaction Platform based College Mathematical Education Methodology
Today I read a paper titled “A Novel Human Computer Interaction Platform based College Mathematical Education Methodology”
The abstract is:
This article analyzes a college mathematics education methodology based on a novel human-computer interaction (HCI) platform.
For the application of virtual reality technology in teaching, a satisfactory result can only be achieved by organizing focused professional and technical personnel, continually improving researchers’ professional knowledge during development, and staying close to the actual needs of teaching.
To obtain better education output, we combine the Kinect to form the HCI based teaching environment.
We firstly review the latest HCI technique and principles of college math courses, then we introduce basic components of the Kinect including the gesture segmentation, systematic implementation and the primary characteristics of the platform.
As the further step, we implement the system with the re-write of script code to build up the personalized HCI assisted education scenario.
The verification and simulation prove the feasibility of our method.
Read – The Snowball
Today I finished reading “The Snowball: Warren Buffett and the Business of Life” by Alice Schroeder
Paper – Heuristics for Planning, Plan Recognition and Parsing
Today I read a paper titled “Heuristics for Planning, Plan Recognition and Parsing”
The abstract is:
In a recent paper, we have shown that Plan Recognition over STRIPS can be formulated and solved using Classical Planning heuristics and algorithms.
In this work, we show that this formulation subsumes the standard formulation of Plan Recognition over libraries through a compilation of libraries into STRIPS theories.
The libraries correspond to AND/OR graphs that may be cyclic and where children of AND nodes may be partially ordered.
These libraries include Context-Free Grammars as a special case, where the Plan Recognition problem becomes a parsing with missing tokens problem.
Plan Recognition over the standard libraries becomes a Planning problem that can be easily solved by any modern planner, while recognition over more complex libraries, including Context-Free Grammars (CFGs), illustrates limitations of current Planning heuristics and suggests improvements that may be relevant in other Planning problems too.
Read – Frek and the Elixir
Today I finished reading “Frek and the Elixir” by Rudy Rucker
Read – The Heart of a Goof
Today I finished reading “The Heart of a Goof” by P.G. Wodehouse
Read – James Herriot’s Animal Stories
Today I finished reading “James Herriot’s Animal Stories” by James Herriot
Read – Uncle Dynamite
Today I finished reading “Uncle Dynamite” by P.G. Wodehouse
Read – Pearls, Girls And Monty Bodkin
Today I finished reading “Pearls, Girls And Monty Bodkin” by P.G. Wodehouse
Read – The Guild: Tink #2
Today I finished reading “The Guild: Tink #2” by Felicia Day
Studying – Baking Advanced pastry techniques
This month I am studying “Baking – Advanced pastry techniques”
The 1st month of advanced pastry techniques.
There’s a two month (four nights a week) class at the local pastry school.
And you think I am going to pass that up?
Update: Advanced means advanced and some students do not fucking understand what the word “advanced” actually fucking means.
Read – Usagi Yojimbo, Vol. 29: Two Hundred Jizo
Today I finished reading “Usagi Yojimbo, Vol. 29: Two Hundred Jizo” by Stan Sakai
Read – The Measure of the Magic
Today I finished reading “The Measure of the Magic” by Terry Brooks
Read – A Sociopath’s Guide to Friendship
Today I finished reading “A Sociopath’s Guide to Friendship” by Stephan Pastis
Paper – Probably Approximately Correct Greedy Maximization
Today I read a paper titled “Probably Approximately Correct Greedy Maximization”
The abstract is:
Submodular function maximization finds application in a variety of real-world decision-making problems.
However, most existing methods, based on greedy maximization, assume it is computationally feasible to evaluate F, the function being maximized.
Unfortunately, in many realistic settings F is too expensive to evaluate exactly even once.
We present probably approximately correct greedy maximization, which requires access only to cheap anytime confidence bounds on F and uses them to prune elements.
We show that, with high probability, our method returns an approximately optimal set.
We propose novel, cheap confidence bounds for conditional entropy, which appears in many common choices of F and for which it is difficult to find unbiased or bounded estimates.
Finally, results on a real-world dataset from a multi-camera tracking system in a shopping mall demonstrate that our approach performs comparably to existing methods, but at a fraction of the computational cost.
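The pruning idea can be sketched for a single greedy step: keep shrinking confidence intervals around each element’s marginal gain and discard any element whose upper bound falls below the current best lower bound. The interval model below (a width that halves each round around the true gain) is invented for illustration; the paper’s anytime bounds are problem-specific.

```python
def bounds(gain, width):
    """Anytime confidence interval on a marginal gain: [gain - w, gain + w]."""
    return gain - width, gain + width

def pac_greedy_step(gains, width=2.0, shrink=0.5, tol=1e-6):
    """Select the element whose lower bound beats every rival's upper bound."""
    alive = list(range(len(gains)))
    while True:
        lo = {i: bounds(gains[i], width)[0] for i in alive}
        hi = {i: bounds(gains[i], width)[1] for i in alive}
        best = max(alive, key=lambda i: lo[i])
        # Prune rivals whose upper bound cannot beat the best lower bound.
        alive = [i for i in alive if i == best or hi[i] > lo[best]]
        if len(alive) == 1 or width < tol:
            return best
        width *= shrink           # pay more computation for tighter bounds

print(pac_greedy_step([0.3, 1.7, 0.9, 1.1]))   # 1: the element with the largest gain
```

The point is that clearly-dominated elements are eliminated while their bounds are still cheap, so F never has to be evaluated exactly.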
Read – Conan Volume 19: Xuthal of the Dusk
Today I finished reading “Conan Volume 19: Xuthal of the Dusk” by Fred Van Lente
Paper – An Online Mechanism for Ridesharing in Autonomous Mobility-on-Demand Systems
Today I read a paper titled “An Online Mechanism for Ridesharing in Autonomous Mobility-on-Demand Systems”
The abstract is:
With proper management, Autonomous Mobility-on-Demand (AMoD) systems have great potential to satisfy the transport demands of urban populations by providing safe, convenient, and affordable ridesharing services.
Meanwhile, such systems can substantially decrease private car ownership and use, and thus significantly reduce traffic congestion, energy consumption, and carbon emissions.
To achieve this objective, an AMoD system requires private information about the demand from passengers.
However, due to self-interestedness, passengers are unlikely to cooperate with the service providers in this regard.
Therefore, an online mechanism is desirable if it incentivizes passengers to truthfully report their actual demand.
For the purpose of promoting ridesharing, we hereby introduce a posted-price, integrated online ridesharing mechanism (IORS) that satisfies desirable properties such as ex-post incentive compatibility, individual rationality, and budget-balance.
Numerical results indicate the competitiveness of IORS compared with two benchmarks, namely the optimal assignment and an offline, auction-based mechanism.
Read – The Gem Collector
Today I finished reading “The Gem Collector” by P.G. Wodehouse
Paper – To Know Where We Are: Vision-Based Positioning in Outdoor Environments
Today I read a paper titled “To Know Where We Are: Vision-Based Positioning in Outdoor Environments”
The abstract is:
Augmented reality (AR) displays have recently become more and more popular, because they are highly intuitive for humans and because high-quality head-mounted displays have developed rapidly.
To achieve such displays with augmented information, highly accurate image registration or ego-positioning is required, but little attention has been paid to outdoor environments.
This paper presents a method for ego-positioning in outdoor environments with low cost monocular cameras.
To reduce the computational and memory requirements as well as the communication overheads, we formulate the model compression algorithm as a weighted k-cover problem for better preserving model structures.
Specifically for real-world vision-based positioning applications, we consider the issues with large scene change and propose a model update algorithm to tackle these problems.
A long-term positioning dataset spanning more than one month, with 106 sessions and 14,275 images, is constructed.
Based on both local and up-to-date models constructed in our approach, extensive experimental results show that high positioning accuracy (mean ~30.9 cm, stdev. ~15.4 cm) can be achieved, which outperforms existing vision-based algorithms.
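The weighted k-cover formulation mentioned for model compression has a natural greedy sketch: repeatedly keep the model point that covers the largest remaining weight of images. The visibility sets and uniform weights below are toy data of my own, not from the paper’s dataset.

```python
def weighted_k_cover(visibility, weights, k):
    """Greedily pick k model points maximizing the total weight of covered images."""
    covered, chosen = set(), []
    for _ in range(k):
        gain = {p: sum(weights[i] for i in imgs - covered)
                for p, imgs in visibility.items() if p not in chosen}
        best = max(gain, key=gain.get)
        chosen.append(best)
        covered |= visibility[best]
    return chosen, covered

visibility = {          # model point -> images in which it is visible
    "p0": {0, 1, 2},
    "p1": {2, 3},
    "p2": {3, 4, 5},
    "p3": {0},
}
weights = {i: 1.0 for i in range(6)}   # e.g. weight every image equally
chosen, covered = weighted_k_cover(visibility, weights, k=2)
print(chosen, sorted(covered))         # ['p0', 'p2'] [0, 1, 2, 3, 4, 5]
```

The paper’s twist is choosing the weights so that compression preserves the model’s structure, not just raw coverage.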
Read – The Croc Ate My Homework
Today I finished reading “The Croc Ate My Homework: A Pearls Before Swine Collection” by Stephan Pastis
Read – The Little Nugget
Today I finished reading “The Little Nugget” by P.G. Wodehouse
Read – Fundamentals of Adventure Game Design
Today I finished reading “Fundamentals of Adventure Game Design” by Ernest Adams
Paper – Pushing the Limits of 3D Color Printing: Error Diffusion with Translucent Materials
Today I read a paper titled “Pushing the Limits of 3D Color Printing: Error Diffusion with Translucent Materials”
The abstract is:
Accurate color reproduction is important in many applications of 3D printing, from design prototypes to 3D color copies or portraits.
Although full color is available via other technologies, multi-jet printers have greater potential for graphical 3D printing, in terms of reproducing complex appearance properties.
However, to date these printers cannot produce full color, and doing so poses substantial technical challenges, from the sheer amount of data to the translucency of the available color materials.
In this paper, we propose an error diffusion halftoning approach to achieve full color with multi-jet printers, which operates on multiple isosurfaces or layers within the object.
We propose a novel traversal algorithm for voxel surfaces, which allows the transfer of existing error diffusion algorithms from 2D printing.
The resulting prints faithfully reproduce colors, color gradients and fine-scale details.
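For reference, the 2D algorithm being generalized to voxel surfaces here is classic Floyd-Steinberg error diffusion. Below is a plain implementation of my own on a flat gray patch, assuming the standard 7/16, 3/16, 5/16, 1/16 weights.

```python
import numpy as np

def floyd_steinberg(img):
    """Binarize a grayscale image, diffusing quantization error to neighbours."""
    out = img.astype(float).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            # Standard Floyd-Steinberg weights, pushed to unvisited neighbours only.
            if x + 1 < w:               out[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     out[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               out[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: out[y + 1, x + 1] += err * 1 / 16
    return out

gray = np.full((16, 16), 0.25)          # flat 25% gray patch
halftone = floyd_steinberg(gray)
print(round(float(halftone.mean()), 2)) # average ink coverage stays near 0.25
```

The paper’s contribution is a traversal order that lets this raster-scan recipe run over voxel isosurfaces, where there is no natural left-to-right, top-to-bottom order.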
Read – The Swords of Lankhmar
Today I finished reading “The Swords of Lankhmar” by Fritz Leiber