Today I finished reading “The Heart of a Goof” by P.G. Wodehouse
Today I finished reading “James Herriot’s Animal Stories” by James Herriot
Today I finished reading “Uncle Dynamite” by P.G. Wodehouse
Today I finished reading “Pearls, Girls And Monty Bodkin” by P.G. Wodehouse
Today I finished reading “The Guild: Tink #2” by Felicia Day
This month I am studying “Baking – Advanced pastry techniques”
The 1st month of advanced pastry techniques.
There’s a two-month (four nights a week) class at the local pastry school.
And you think I am going to pass that up?
Update: Advanced means advanced and some students do not fucking understand what the word “advanced” actually fucking means.
Today I finished reading “Usagi Yojimbo, Vol. 29: Two Hundred Jizo” by Stan Sakai
Today I finished reading “The Measure of the Magic” by Terry Brooks
Today I finished reading “A Sociopath’s Guide to Friendship” by Stephan Pastis
Today I read a paper titled “Probably Approximately Correct Greedy Maximization”
The abstract is:
Submodular function maximization finds application in a variety of real-world decision-making problems.
However, most existing methods, based on greedy maximization, assume it is computationally feasible to evaluate F, the function being maximized.
Unfortunately, in many realistic settings F is too expensive to evaluate exactly even once.
We present probably approximately correct greedy maximization, which requires access only to cheap anytime confidence bounds on F and uses them to prune elements.
We show that, with high probability, our method returns an approximately optimal set.
We propose novel, cheap confidence bounds for conditional entropy, which appears in many common choices of F and for which it is difficult to find unbiased or bounded estimates.
Finally, results on a real-world dataset from a multi-camera tracking system in a shopping mall demonstrate that our approach performs comparably to existing methods, but at a fraction of the computational cost.
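The pruning idea at the heart of the paper is simple enough to sketch in a few lines. This is my own toy reconstruction, not the authors’ code: pick greedily as usual, but evaluate candidates only through cheap (lower, upper) confidence bounds, and skip any element whose upper bound cannot beat the best lower bound seen so far.

```python
import math

def pac_greedy(ground_set, bounds, k):
    """Greedy maximization that touches F only through confidence bounds.

    bounds(selected, e) returns a (lower, upper) interval on the value of
    adding element e to the current selection. Toy sketch, not the paper's
    actual algorithm.
    """
    selected = []
    for _ in range(k):
        best, best_lb = None, -math.inf
        for e in ground_set:
            if e in selected:
                continue
            lb, ub = bounds(selected, e)
            if ub < best_lb:
                # Prune: this element cannot possibly beat the current best.
                continue
            if lb > best_lb:
                best, best_lb = e, lb
        selected.append(best)
    return selected
```

With exact bounds (lower equals upper) this collapses to ordinary greedy; the interesting case is when `bounds` is a cheap anytime estimate that only needs to be tight enough to separate candidates.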
Today I finished reading “Conan Volume 19: Xuthal of the Dusk” by Fred Van Lente
Today I read a paper titled “An Online Mechanism for Ridesharing in Autonomous Mobility-on-Demand Systems”
The abstract is:
With proper management, Autonomous Mobility-on-Demand (AMoD) systems have great potential to satisfy the transport demands of urban populations by providing safe, convenient, and affordable ridesharing services.
Meanwhile, such systems can substantially decrease private car ownership and use, and thus significantly reduce traffic congestion, energy consumption, and carbon emissions.
To achieve this objective, an AMoD system requires private information about the demand from passengers.
However, due to self-interestedness, passengers are unlikely to cooperate with the service providers in this regard.
Therefore, an online mechanism is desirable if it incentivizes passengers to truthfully report their actual demand.
For the purpose of promoting ridesharing, we hereby introduce a posted-price, integrated online ridesharing mechanism (IORS) that satisfies desirable properties such as ex-post incentive compatibility, individual rationality, and budget-balance.
Numerical results indicate the competitiveness of IORS compared with two benchmarks, namely the optimal assignment and an offline, auction-based mechanism.
Today I finished reading “The Gem Collector” by P.G. Wodehouse
Today I read a paper titled “To Know Where We Are: Vision-Based Positioning in Outdoor Environments”
The abstract is:
Augmented reality (AR) displays have recently become more and more popular because of their high intuitiveness for humans, and high-quality head-mounted displays have rapidly developed.
To achieve such displays with augmented information, highly accurate image registration or ego-positioning is required, but little attention has been paid to outdoor environments.
This paper presents a method for ego-positioning in outdoor environments with low-cost monocular cameras.
To reduce the computational and memory requirements as well as the communication overheads, we formulate the model compression algorithm as a weighted k-cover problem for better preserving model structures.
Specifically for real-world vision-based positioning applications, we consider the issues with large scene change and propose a model update algorithm to tackle these problems.
A long-term positioning dataset with more than one month of data, 106 sessions, and 14,275 images is constructed.
Based on both local and up-to-date models constructed in our approach, extensive experimental results show that high positioning accuracy (mean ~30.9cm, stdev ~15.4cm) can be achieved, which outperforms existing vision-based algorithms.
Today I finished reading “The Croc Ate My Homework: A Pearls Before Swine Collection” by Stephan Pastis
Today I finished reading “The Little Nugget” by P.G. Wodehouse
Today I finished reading “Fundamentals of Adventure Game Design” by Ernest Adams
Today I read a paper titled “Pushing the Limits of 3D Color Printing: Error Diffusion with Translucent Materials”
The abstract is:
Accurate color reproduction is important in many applications of 3D printing, from design prototypes to 3D color copies or portraits.
Although full color is available via other technologies, multi-jet printers have greater potential for graphical 3D printing, in terms of reproducing complex appearance properties.
However, to date these printers cannot produce full color, and doing so poses substantial technical challenges, from the sheer amount of data to the translucency of the available color materials.
In this paper, we propose an error diffusion halftoning approach to achieve full color with multi-jet printers, which operates on multiple isosurfaces or layers within the object.
We propose a novel traversal algorithm for voxel surfaces, which allows the transfer of existing error diffusion algorithms from 2D printing.
The resulting prints faithfully reproduce colors, color gradients and fine-scale details.
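A note to self: the classic 2D building block the paper is transferring to voxel surfaces is error diffusion a la Floyd-Steinberg, where each sample is quantized to the nearest printable level and the quantization error is pushed forward to its neighbors. A minimal 1-D version (my sketch, not the paper’s algorithm, which operates on isosurface traversals):

```python
def error_diffusion_1d(values, levels=(0.0, 1.0)):
    """Quantize each sample to the nearest available level and carry the
    quantization error into the next sample. Toy 1-D illustration of the
    error-diffusion principle used in halftoning."""
    out = []
    err = 0.0
    for v in values:
        target = v + err
        q = min(levels, key=lambda l: abs(l - target))  # nearest level
        out.append(q)
        err = target - q  # residual error diffuses forward
    return out
```

Run it on a flat 50% gray and you get the expected alternating pattern whose average preserves the input tone, which is the whole point of the technique.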
Today I finished reading “The Swords of Lankhmar” by Fritz Leiber
Today I finished reading “Mrs Bradshaw’s Handbook” by Terry Pratchett
Today I read a paper titled “HMM and DTW for evaluation of therapeutical gestures using kinect”
The abstract is:
Automatic recognition of the quality of movement in human beings is a challenging task, given the difficulty both in defining the constraints that make a movement correct, and the difficulty in using noisy data to determine if these constraints were satisfied.
This paper presents a method for the detection of deviations from the correct form in movements from physical therapy routines based on Hidden Markov Models, which is compared to Dynamic Time Warping.
The activities studied include upper and lower limb movements; the data used comes from a Kinect sensor.
Correct repetitions of the activities of interest were recorded, as well as deviations from these correct forms.
The ability of the proposed approach to detect these deviations was studied.
Results show that a system based on HMM is much more likely to determine if a certain movement has deviated from the specification.
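The DTW baseline they compare against is easy to state: dynamic programming over all monotone alignments of two sequences, charging the pointwise distance at each matched pair. A minimal version for 1-D sequences (my own sketch, not the paper’s implementation, which works on multi-joint Kinect skeleton data):

```python
import math

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences.
    D[i][j] is the best cost of aligning a[:i] with b[:j]."""
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of: insertion, deletion, or match.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Because DTW absorbs timing differences, a repetition performed slower than the template still scores near zero; that invariance is exactly why it needs a model like an HMM alongside it to flag genuine deviations in form.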
Today I finished reading “The Martian” by Andy Weir
Today I read a paper titled “Landmark-Guided Elastic Shape Analysis of Human Character Motions”
The abstract is:
Motions of virtual characters in movies or video games are typically generated by recording actors using motion capturing methods.
Animations generated this way often need postprocessing, such as improving the periodicity of cyclic animations or generating entirely new motions by interpolation of existing ones.
Furthermore, search and classification of recorded motions becomes more and more important as the amount of recorded motion data grows.
In this paper, we will apply methods from shape analysis to the processing of animations.
More precisely, we will use the by-now classical elastic metric model used in shape matching, and extend it by incorporating additional inexact feature point information, which leads to an improved temporal alignment of different animations.
Today I read a paper titled “The History of Mobile Augmented Reality”
The abstract is:
This document summarizes the major milestones in mobile Augmented Reality between 1968 and 2014.
Major parts of the list were compiled by the members of the Christian Doppler Laboratory for Handheld Augmented Reality in 2010 (author list in alphabetical order) for the ISMAR society.
Later in 2013 it was updated, and more recent work was added during preparation of this report.
Permission is granted to copy and modify.
Today I read a paper titled “Simplified Boardgames”
The abstract is:
We formalize Simplified Boardgames language, which describes a subclass of arbitrary board games.
The language structure is based on the regular expressions, which makes the rules easily machine-processable while keeping the rules concise and fairly human-readable.
Today I read a paper titled “Debugging Machine Learning Tasks”
The abstract is:
Unlike traditional programs (such as operating systems or word processors) which have large amounts of code, machine learning tasks use programs with relatively small amounts of code (written in machine learning libraries), but voluminous amounts of data.
Just like developers of traditional programs debug errors in their code, developers of machine learning tasks debug and fix errors in their data.
However, algorithms and tools for debugging and fixing errors in data are less common, when compared to their counterparts for detecting and fixing errors in code.
In this paper, we consider classification tasks where errors in training data lead to misclassifications in test points, and propose an automated method to find the root causes of such misclassifications.
Our root cause analysis is based on Pearl’s theory of causation, and uses Pearl’s PS (Probability of Sufficiency) as a scoring metric.
Our implementation, Psi, encodes the computation of PS as a probabilistic program, and uses recent work on probabilistic programs and transformations on probabilistic programs (along with gray-box models of machine learning algorithms) to efficiently compute PS.
Psi is able to identify root causes of data errors in interesting data sets.
Today I finished reading “Working Effectively with Legacy Code” by Michael C. Feathers
Today I finished reading “Stardust Memories” by Yukinobu Hoshino
This month I am studying “Transforming a photo into a painting with Photoshop”
My technical Photoshop skills are pretty sharp.
My creative Photoshop skills, not so much.
I am always open to learning a new creative technique because I generally suck at them.
Today I finished reading “Clinical Procedures in Emergency Medicine” by James R. Roberts
Today I finished reading “The Stainless Steel Rat Joins the Circus” by Harry Harrison
Today I read a paper titled “Immersive Augmented Reality Training for Complex Manufacturing Scenarios”
The abstract is:
In the complex manufacturing sector, a considerable amount of resources is focused on developing new skills and training workers.
In that context, increasing the effectiveness of those processes and reducing the investment required is an outstanding issue.
In this paper we present an experiment that shows how modern Human Computer Interaction (HCI) metaphors such as collaborative mixed-reality can be used to transmit procedural knowledge and could eventually replace other forms of face-to-face training.
We implement a real-time Immersive Augmented Reality (IAR) setup with see-through cameras that allows for collaborative interactions that can simulate conventional forms of training.
The obtained results indicate that people who took the IAR training achieved the same performance as people in the conventional face-to-face training condition.
These results, their implications for future training and the use of HCI paradigms in this context are discussed in this paper.
Today I finished reading “The Practical Princess and Other Liberating Fairy Tales” by Jay Williams
Today I finished reading “Oxford Handbook of Emergency Medicine” by Jonathan Wyatt
Today I finished reading “The Clicking of Cuthbert” by P.G. Wodehouse
Today I read a paper titled “Heat as an inertial force: A quantum equivalence principle”
The abstract is:
The firewall was introduced into black hole evaporation scenarios as a deus ex machina designed to break entanglements and preserve unitarity (Almheiri et al., 2013).
Here we show that the firewall actually exists and does break entanglements, but only in the context of a virtual reality for observers stationed near the horizon, who are following the long-term evolution of the hole.
These observers are heated by acceleration radiation at the Unruh temperature and see pair creation at the horizon as a high-energy phenomenon.
The objective reality is very different.
We argue that Hawking pair creation is entirely a low-energy process in which entanglements never arise.
The Hawking particles materialize as low-energy excitations with typical wavelength considerably larger than the black hole radius.
They thus emerge into a very non-uniform environment inimical to entanglement-formation.
Today I read a paper titled “Towards Reversible De-Identification in Video Sequences Using 3D Avatars and Steganography”
The abstract is:
We propose a de-identification pipeline that protects the privacy of humans in video sequences by replacing them with rendered 3D human models, hence concealing their identity while retaining the naturalness of the scene.
The original images of humans are steganographically encoded in the carrier image, i.e. the image containing the original scene and the rendered 3D human models.
We qualitatively explore the feasibility of our approach, utilizing the Kinect sensor and its libraries to detect and localize human joints.
A 3D avatar is rendered into the scene using the obtained joint positions, and the original human image is steganographically encoded in the new scene.
Our qualitative evaluation shows reasonably good results that merit further exploration.
Today I finished reading “The Gypsy Morph” by Terry Brooks
Today I finished reading “The Art of Readable Code” by Dustin Boswell
Good ideas. Many I agree with. Some I very much don’t. But like everything that is opinionated & style-based, what’s the saying? Fashions come and go.
But I will agree with this, a good aesthetic style makes code easily readable. And code is read far more than it is written.
Whether you agree with the contents of the book or not, I think this book should be one of those “required reading” books that every programmer should be made to read at least once in their life.
Today I finished reading “Morgawr” by Terry Brooks
Another Shannara book. And whilst I enjoy the world, I cannot help but feel I’ve been here before. The premise was strong, the series started out well, but it, much like the airship Jerle Shannara, seemed to drift aimlessly at times, awkwardly stumbling from one set piece to another. I’ve got very mixed feelings about this book, on the one hand, the narrative moves fast in this book, unlike the earlier books of the series that are positively glacial at times (lots of scene setting). On the other hand, I got the sense it was moving fast just to move fast.
Brooks, over the years, has become stronger as a writer than I could have ever thought. I cannot read his earlier works, but I am also getting the sense that he is treading old ground at times. One of the things I have always liked about Brooks’ writing though is the fact he is willing to kill his children if the story arc dictates it, and this series has not disappointed.
Today I finished reading “Fundamentals of Sports Game Design” by Ernest Adams.
Adams has been prolific in his writings about game design and all the different niches that they cover, from the generic “game development” to role-playing games and even the more esoteric, such as the narrative skills required for game design. This was a nice roundup of the rules and design challenges of creating video games that concern, well… it’s in the title, sports games.
Whilst the book doesn’t get into the specifics of any particular sports game, it has a definite slant towards team-vs-team sports. And team-vs-team sports usually involve a ball or puck of some sort that the players hit around. The book deals with concepts such as physics, injuries and home-field advantages (the psychology of that specific topic at least) that teams may have. Each section doesn’t get bogged down too deeply in each topic, but I also felt that there was more that could have been written. But then, where do you draw that line?
In our modern IP (intellectual property) driven world there’s a brief section on licenses, trademarks and publicity rights. I didn’t feel these few pages were in-depth enough, but then, that said, if you are working on a sports title that is part of a larger ecosystem then your company lawyers are going to be more involved with that aspect of the business anyway.
Overall a fast read through, worth picking up even if (like me, who has sworn to never work on another sports title as long as I live) sports games are not on the horizon for your next few upcoming projects.
P.S. If you say “sports” too many times it no longer sounds like a real word.
This week I am studying “Beast Lighting”
Haven’t had much occasion to play around with the Beast Lighting plug-ins for either Maya or Unity 3D and I felt I was falling behind on both those fronts. Time to correct my oversight. Based on the (on-line, interactive) class content, it appears this will take me 30 to 50 hours of studying to plow through if I do all of the examples. It is supposed to be a three-month class, but I think I can get through it reasonably quickly if I push myself.
Today I read a paper titled “Look-ahead before you leap: end-to-end active recognition by forecasting the effect of motion”
My initial thoughts: A lot of vision recognition systems, especially the simpler ones, are based on static imagery. If recognition of a moving scene is deployed it is rarely predictive and rarely (if ever) takes into account the motion of the observer. Actually having demonstrable theories about how to deploy vision recognition in an observer system that is itself moving is hugely beneficial in all sorts of field applications.
Update: I’ve read stuff about this previously, in some other papers. Hmmm… need to go back to the cited works to see who these guys are referencing when I get home this evening.
The abstract is:
Visual recognition systems mounted on autonomous moving agents face the challenge of unconstrained data, but simultaneously have the opportunity to improve their performance by moving to acquire new views of test data.
In this work, we first show how a recurrent neural network-based system may be trained to perform end-to-end learning of motion policies suited for the “active recognition” setting.
Further, we hypothesize that active vision requires an agent to have the capacity to reason about the effects of its motions on its view of the world.
To verify this hypothesis, we attempt to induce this capacity in our active recognition pipeline, by simultaneously learning to forecast the effects of the agent’s motions on its internal representation of its cumulative knowledge obtained from all past views.
Results across two challenging datasets confirm both that our end-to-end system successfully learns meaningful policies for active recognition, and that “learning to look ahead” further boosts recognition performance.
Today I read a paper titled “Creating Simplified 3D Models with High Quality Textures”
Initial thoughts: More Kinect research. It’s a fun little toy, and hooking up multiple Kinects together gives me a full 360 degree scanning system with very little occlusion. Have been integrating the Kinect scanners into some HoloLens AR experiments where I process the Kinect data off-board on a workstation, then feed that to the HoloLens via a dedicated data server so that I can overlay a transformational image on to the user’s body. Think of it like being able to see yourself wearing an Iron Man suit, or that scene in Blade Runner 2049 where the digital waifu overlays herself with the human sex worker for a weird Replicant, hologram, human three way.
Combining the ideas from this paper, with the observer motion predictive ideas from another Kinect paper I am working through opens up a huge opportunity to “body match” the hologram that the HoloLens is displaying with the human user. It is going to take some work, but I think I am on to something here.
This paper is sort of tangentially related because what I am trying to do is scan (without a lot of noise) the user, get high resolution, non-occluded data, process it on a “lots of cores” workstation with a couple of high-end GPUs, and then feed that pre-processed model data into the HoloLens for real-time holographic overlay and real-time model distortion of the user. Basically, I am making a “digital fat suit/fun house mirror” that mimics the human as they move around.
The abstract is:
This paper presents an extension to the KinectFusion algorithm which allows creating simplified 3D models with high quality RGB textures.
This is achieved through (i) creating model textures using images from an HD RGB camera that is calibrated with Kinect depth camera, (ii) using a modified scheme to update model textures in an asymmetrical colour volume that contains a higher number of voxels than that of the geometry volume, (iii) simplifying dense polygon mesh model using quadric-based mesh decimation algorithm, and (iv) creating and mapping 2D textures to every polygon in the output 3D model.
The proposed method is implemented in real-time by means of GPU parallel processing.
Visualization via ray casting of both geometry and colour volumes provides users with a real-time feedback of the currently scanned 3D model.
Experimental results show that the proposed method is capable of keeping the model texture quality even for a heavily decimated model and that, when reconstructing small objects, photorealistic RGB textures can still be reconstructed.
“Hey, connect with me. I’m the craziest/funniest/most popular/successful/most-connected/other-adjective person on LinkedIn.”
Thanks for reaching out but I’ll take a raincheck.
Should I want an endless stream of Instagram drivel, I know where to find you.
This is why it’s always important to look at what an inbound connection has been posting, forwarding and liking before hitting the “Accept” button.
If someone has written a recommendation for you based on how well you can make use of auto-responder scripts to pester your connections with irrelevancy you can pretty much kiss your chances of connecting goodbye.
I just don’t have the patience for that special brand of bullshit that directly sets out to waste my limited time on this Earth.
Today I read a paper titled “Noise in Structured-Light Stereo Depth Cameras: Modeling and its Applications”
My initial thoughts: I’ve done extensive experiments with the Kinect and one of the things that has always annoyed me is just how darn noisy the RGBD image data is coming off the device. I’ve developed some algorithms that denoise it, but a lot of how well your software works is completely dependent on environmental issues. Light bloom from an outside window can totally throw off your application and render it all but useless during a critical demo (trust me, I know this first hand).
I think I can make use of some of the denoising techniques covered in this paper, combined with a 3D planes segmentation algorithm.
The abstract is:
Depth maps obtained from commercially available structured-light stereo based depth cameras, such as the Kinect, are easy to use but are affected by significant amounts of noise.
This paper is devoted to a study of the intrinsic noise characteristics of such depth maps: the standard deviation of noise in estimated depth varies quadratically with the distance of the object from the depth camera.
We validate this theoretical model against empirical observations and demonstrate the utility of this noise model in three popular applications: depth map denoising, volumetric scan merging for 3D modeling, and identification of 3D planes in depth maps.
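That quadratic noise model should drop straight into my own denoising code as an inverse-variance weight. A sketch of what I have in mind, with a made-up coefficient (the real value would have to be calibrated per device, it is not a number from the paper):

```python
def depth_noise_std(z, a=2.85e-5):
    """Standard deviation of depth noise at distance z (meters), assuming
    the quadratic model sigma(z) = a * z^2. The coefficient a is a
    placeholder, not a value taken from the paper."""
    return a * z * z

def fusion_weight(z, a=2.85e-5):
    """Inverse-variance weight for fusing a depth sample at distance z;
    far samples are noisier, so they count for less."""
    s = depth_noise_std(z, a)
    return 1.0 / (s * s)
```

The practical upshot: a sample at 4 m carries far less weight than one at 1 m, which matches my experience of the Kinect’s data getting rapidly worse past a couple of meters.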
Today I finished reading “The Small Bachelor” by P.G. Wodehouse
Today I finished reading “Pieces 5: Hellhound-02” by Masamune Shirow
Today I finished reading “The Natural Laws of Business: How to Harness the Power of Evolution, Physics, and Economics to Achieve Business Success” by Richard Koch
Today I read a paper titled “Efficient Upsampling of Natural Images”
The abstract is:
We propose a novel method of efficient upsampling of a single natural image.
Current methods for image upsampling tend to produce high-resolution images with either blurry salient edges, or loss of fine textural detail, or spurious noise artifacts.
In our method, we mitigate these effects by modeling the input image as a sum of edge and detail layers, operating upon these layers separately, and merging the upscaled results in an automatic fashion.
We formulate the upsampled output image as the solution to a non-convex energy minimization problem, and propose an algorithm to obtain a tractable approximate solution.
Our algorithm comprises two main stages.
1) For the edge layer, we use a nonparametric approach by constructing a dictionary of patches from a given image, and synthesize edge regions in a higher-resolution version of the image.
2) For the detail layer, we use a global parametric texture enhancement approach to synthesize detail regions across the image.
We demonstrate that our method is able to accurately reproduce sharp edges as well as synthesize photorealistic textures, while avoiding common artifacts such as ringing and haloing.
In addition, our method involves no training phase or estimation of model parameters, and is easily parallelizable.
We demonstrate the utility of our method on a number of challenging standard test photos.
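My mental model of the edge/detail split is easy to prototype in 1-D: smooth the signal to get a base (edge) layer, keep the residual as detail, upsample each with a method suited to it, then merge. This is a cartoon of the idea under my own assumptions, not the paper’s actual algorithm (which works on 2D images with dictionary-based edge synthesis and a parametric texture model):

```python
def upsample_layers(signal, factor=2):
    """Toy 1-D illustration of layered upsampling: decompose into a smooth
    base layer plus a residual detail layer, upsample each separately,
    and merge the results."""
    n = len(signal)
    # Base layer: 3-tap moving average with clamped borders.
    base = [(signal[max(i - 1, 0)] + signal[i] + signal[min(i + 1, n - 1)]) / 3.0
            for i in range(n)]
    detail = [s - b for s, b in zip(signal, base)]

    def interp(xs):
        # Linear interpolation keeps the base layer's ramps smooth.
        out = []
        for i in range(len(xs) - 1):
            for k in range(factor):
                t = k / factor
                out.append((1 - t) * xs[i] + t * xs[i + 1])
        out.append(xs[-1])
        return out

    up_base = interp(base)
    # Replication keeps the detail layer's residuals crisp.
    up_detail = [d for d in detail for _ in range(factor)]
    length = min(len(up_base), len(up_detail))
    return [up_base[i] + up_detail[i] for i in range(length)]
```

At the original sample positions the merge reconstructs the input exactly, since base plus detail is the identity there; the two layers only diverge at the interpolated positions in between.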