Today I finished reading “Fundamentals of Adventure Game Design” by Ernest Adams
Today I read a paper titled “Pushing the Limits of 3D Color Printing: Error Diffusion with Translucent Materials”
The abstract is:
Accurate color reproduction is important in many applications of 3D printing, from design prototypes to 3D color copies or portraits.
Although full color is available via other technologies, multi-jet printers have greater potential for graphical 3D printing, in terms of reproducing complex appearance properties.
However, to date these printers cannot produce full color, and doing so poses substantial technical challenges, from the sheer amount of data to the translucency of the available color materials.
In this paper, we propose an error diffusion halftoning approach to achieve full color with multi-jet printers, which operates on multiple isosurfaces or layers within the object.
We propose a novel traversal algorithm for voxel surfaces, which allows the transfer of existing error diffusion algorithms from 2D printing.
The resulting prints faithfully reproduce colors, color gradients and fine-scale details.
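The core trick transfers directly from classic 2D halftoning. As a reference point, here is a minimal sketch of Floyd-Steinberg error diffusion on a grayscale image in Python; the paper's contribution is lifting this kind of scan onto voxel isosurfaces, so treat the raster traversal below as the 2D ancestor, not their algorithm.

    import numpy as np

    def floyd_steinberg(gray):
        """Binarize an image in [0, 1], pushing each pixel's
        quantization error onto unvisited neighbors with the
        standard 7/16, 3/16, 5/16, 1/16 weights."""
        img = gray.astype(np.float64).copy()
        h, w = img.shape
        for y in range(h):
            for x in range(w):
                old = img[y, x]
                new = 1.0 if old >= 0.5 else 0.0
                img[y, x] = new
                err = old - new
                if x + 1 < w:
                    img[y, x + 1] += err * 7 / 16
                if y + 1 < h:
                    if x > 0:
                        img[y + 1, x - 1] += err * 3 / 16
                    img[y + 1, x] += err * 5 / 16
                    if x + 1 < w:
                        img[y + 1, x + 1] += err * 1 / 16
        return img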
Today I finished reading “The Swords of Lankhmar” by Fritz Leiber
Today I finished reading “Mrs Bradshaw’s Handbook” by Terry Pratchett
Today I read a paper titled “HMM and DTW for evaluation of therapeutical gestures using kinect”
The abstract is:
Automatic recognition of the quality of movement in human beings is a challenging task, given the difficulty both in defining the constraints that make a movement correct, and the difficulty in using noisy data to determine if these constraints were satisfied.
This paper presents a method for the detection of deviations from the correct form in movements from physical therapy routines based on Hidden Markov Models, which is compared to Dynamic Time Warping.
The activities studied include upper and lower limb movements; the data comes from a Kinect sensor.
Correct repetitions of the activities of interest were recorded, as well as deviations from these correct forms.
The ability of the proposed approach to detect these deviations was studied.
Results show that a system based on HMM is much more likely to determine if a certain movement has deviated from the specification.
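The DTW baseline they compare against is worth knowing cold, since it is a ten-line dynamic program. A minimal sketch, assuming scalar-valued sequences (real skeleton data would substitute joint-angle vectors and a vector norm for abs):

    import numpy as np

    def dtw_distance(a, b):
        """Dynamic time warping distance between two sequences,
        O(len(a) * len(b)) time and memory."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]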
Today I finished reading “The Martian” by Andy Weir
Today I read a paper titled “Landmark-Guided Elastic Shape Analysis of Human Character Motions”
The abstract is:
Motions of virtual characters in movies or video games are typically generated by recording actors using motion capturing methods.
Animations generated this way often need postprocessing, such as improving the periodicity of cyclic animations or generating entirely new motions by interpolation of existing ones.
Furthermore, search and classification of recorded motions becomes more and more important as the amount of recorded motion data grows.
In this paper, we will apply methods from shape analysis to the processing of animations.
More precisely, we will use the now-classical elastic metric model from shape matching and extend it by incorporating additional inexact feature-point information, which leads to improved temporal alignment of different animations.
Today I read a paper titled “The History of Mobile Augmented Reality”
The abstract is:
This document summarizes the major milestones in mobile Augmented Reality between 1968 and 2014.
Major parts of the list were compiled by the members of the Christian Doppler Laboratory for Handheld Augmented Reality in 2010 (author list in alphabetical order) for the ISMAR society.
Later in 2013 it was updated, and more recent work was added during preparation of this report.
Permission is granted to copy and modify.
Today I read a paper titled “Simplified Boardgames”
The abstract is:
We formalize the Simplified Boardgames language, which describes a subclass of arbitrary board games.
The language structure is based on regular expressions, which makes the rules easily machine-processable while keeping them concise and fairly human-readable.
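To give a flavor of the idea (my own illustration, not the paper's exact syntax): a piece's legal moves form a regular language over relative-step letters, so a rook sliding right might be written as something like (1,0,empty)* (1,0,empty). A sketch of what interpreting such a rule amounts to:

    def slide_moves(board, x, y, dx, dy):
        """Enumerate destinations for a rule of the shape
        (dx,dy,empty)* (dx,dy,empty): repeat a relative step while
        squares stay empty. 'board' maps (x, y) to a piece or None;
        off-board squares are simply absent from the dict."""
        nx, ny = x + dx, y + dy
        while (nx, ny) in board and board[(nx, ny)] is None:
            yield (nx, ny)
            nx, ny = nx + dx, ny + dy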
Today I read a paper titled “Debugging Machine Learning Tasks”
The abstract is:
Unlike traditional programs (such as operating systems or word processors) which have large amounts of code, machine learning tasks use programs with relatively small amounts of code (written in machine learning libraries), but voluminous amounts of data.
Just like developers of traditional programs debug errors in their code, developers of machine learning tasks debug and fix errors in their data.
However, algorithms and tools for debugging and fixing errors in data are less common, when compared to their counterparts for detecting and fixing errors in code.
In this paper, we consider classification tasks where errors in training data lead to misclassifications in test points, and propose an automated method to find the root causes of such misclassifications.
Our root cause analysis is based on Pearl’s theory of causation, and uses Pearl’s PS (Probability of Sufficiency) as a scoring metric.
Our implementation, Psi, encodes the computation of PS as a probabilistic program, and uses recent work on probabilistic programs and transformations on probabilistic programs (along with gray-box models of machine learning algorithms) to efficiently compute PS.
Psi is able to identify root causes of data errors in interesting data sets.
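For context, Pearl's probability of sufficiency for a candidate cause X = x of an effect Y = y is, as I recall it from Causality, the counterfactual quantity

    PS = P(Y_{X=x} = y \mid X = x', Y = y')

that is, the probability that forcing X to x would have produced y, evaluated in cases where we actually observed neither the cause nor the effect.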
Today I finished reading “Working Effectively with Legacy Code” by Michael C. Feathers
Today I finished reading “Stardust Memories” by Yukinobu Hoshino
This month I am studying “Transforming a photo into a painting with Photoshop”
My technical Photoshop skills are pretty sharp.
My creative Photoshop skills, not so much.
I am always open to learning a new creative technique because I generally suck at them.
Today I finished reading “Clinical Procedures in Emergency Medicine” by James R. Roberts
Today I finished reading “The Stainless Steel Rat Joins the Circus” by Harry Harrison
Today I read a paper titled “Immersive Augmented Reality Training for Complex Manufacturing Scenarios”
The abstract is:
In the complex manufacturing sector, a considerable amount of resources is devoted to developing new skills and training workers.
In that context, increasing the effectiveness of those processes and reducing the investment required is an outstanding issue.
In this paper we present an experiment that shows how modern Human Computer Interaction (HCI) metaphors such as collaborative mixed-reality can be used to transmit procedural knowledge and could eventually replace other forms of face-to-face training.
We implement a real-time Immersive Augmented Reality (IAR) setup with see-through cameras that allows for collaborative interactions that can simulate conventional forms of training.
The obtained results indicate that people who took the IAR training achieved the same performance as people in the conventional face-to-face training condition.
These results, their implications for future training and the use of HCI paradigms in this context are discussed in this paper.
Today I finished reading “The Practical Princess and Other Liberating Fairy Tales” by Jay Williams
Today I finished reading “Oxford Handbook of Emergency Medicine” by Jonathan Wyatt
Today I finished reading “The Clicking of Cuthbert” by P.G. Wodehouse
Today I read a paper titled “Heat as an inertial force: A quantum equivalence principle”
The abstract is:
The firewall was introduced into black hole evaporation scenarios as a deus ex machina designed to break entanglements and preserve unitarity (Almheiri et al., 2013).
Here we show that the firewall actually exists and does break entanglements, but only in the context of a virtual reality for observers stationed near the horizon, who are following the long-term evolution of the hole.
These observers are heated by acceleration radiation at the Unruh temperature and see pair creation at the horizon as a high-energy phenomenon.
The objective reality is very different.
We argue that Hawking pair creation is entirely a low-energy process in which entanglements never arise.
The Hawking particles materialize as low-energy excitations with typical wavelength considerably larger than the black hole radius.
They thus emerge into a very non-uniform environment inimical to entanglement-formation.
Today I read a paper titled “Towards Reversible De-Identification in Video Sequences Using 3D Avatars and Steganography”
The abstract is:
We propose a de-identification pipeline that protects the privacy of humans in video sequences by replacing them with rendered 3D human models, hence concealing their identity while retaining the naturalness of the scene.
The original images of humans are steganographically encoded in the carrier image, i.e. the image containing the original scene and the rendered 3D human models.
We qualitatively explore the feasibility of our approach, utilizing the Kinect sensor and its libraries to detect and localize human joints.
A 3D avatar is rendered into the scene using the obtained joint positions, and the original human image is steganographically encoded in the new scene.
Our qualitative evaluation shows reasonably good results that merit further exploration.
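The steganographic half is the easy part to prototype. A minimal sketch of classic least-significant-bit embedding in Python (my stand-in for illustration; the paper does not necessarily use plain LSB coding):

    import numpy as np

    def lsb_embed(carrier, payload_bits):
        """Hide a 0/1 bit array in the least significant bit of the
        carrier image's 8-bit channel values."""
        bits = np.asarray(payload_bits, dtype=np.uint8)
        flat = carrier.astype(np.uint8).ravel().copy()
        assert bits.size <= flat.size, "payload too large for carrier"
        flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
        return flat.reshape(carrier.shape)

    def lsb_extract(stego, n_bits):
        """Recover the first n_bits hidden by lsb_embed."""
        return stego.astype(np.uint8).ravel()[:n_bits] & 1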
Today I finished reading “The Gypsy Morph” by Terry Brooks
Today I finished reading “The Art of Readable Code” by Dustin Boswell
Good ideas. Many I agree with. Some I very much don’t. But like everything that is opinionated & style-based, what’s the saying? Fashions come and go.
But I will agree with this: a good aesthetic style makes code easily readable. And code is read far more than it is written.
Whether you agree with the contents of the book or not, I think this should be one of those “required reading” books that every programmer is made to read at least once in their life.
Today I finished reading “Morgawr” by Terry Brooks
Another Shannara book. And whilst I enjoy the world, I cannot help but feel I’ve been here before. The premise was strong, and the series started out well, but it, much like the airship Jerle Shannara, seemed to drift aimlessly at times, awkwardly stumbling from one set piece to another. I’ve got very mixed feelings about this book: on the one hand, the narrative moves fast, unlike the earlier books of the series, which are positively glacial at times (lots of scene setting). On the other hand, I got the sense it was moving fast just to move fast.
Brooks, over the years, has become stronger as a writer than I could have ever thought. I cannot read his earlier works, but I am also getting the sense that he is treading old ground at times. One of the things I have always liked about Brooks’ writing though is the fact he is willing to kill his children if the story arc dictates it, and this series has not disappointed.
Today I finished reading “Fundamentals of Sports Game Design” by Ernest Adams.
Adams has been prolific in his writings about game design and all the different niches they cover, from generic “game development” to role-playing games and even more esoteric topics, such as the narrative skills required for game design. This was a nice roundup of the rules and design challenges of creating video games that concern, well… it’s in the title: sports games.
Whilst the book doesn’t get into the specifics of any particular sports game, it has a definite slant toward team-vs-team sports. And team-vs-team sports usually involve a ball or puck of some sort that the players hit around. The book deals with concepts such as physics, injuries, and home-field advantage (the psychology of that specific topic, at least) that teams may have. No section gets bogged down too deeply in its topic, but I also felt there was more that could have been written. But then, where do you draw that line?
In our modern IP (intellectual property) driven world there’s a brief section on licenses, trademarks, and publicity rights. I didn’t feel these few pages were in-depth enough but, that said, if you are working on a sports title that is part of a larger ecosystem then your company lawyers are going to be more involved with that aspect of the business anyway.
Overall a fast read-through and worth picking up, even if (like me, having sworn never to work on another sports title as long as I live) sports games are not on the horizon for your next few projects.
P.S. If you say “sports” too many times it no longer sounds like a real word.
This week I am studying “Beast Lighting”
Haven’t had much occasion to play around with the Beast Lighting plug-ins for either Maya or Unity 3D, and I felt I was falling behind on both fronts. Time to correct my oversight. Based on the (online, interactive) class content, it appears this will take me 30 to 50 hours of studying to plow through if I do all of the examples. It is supposed to be a three-month class, but I think I can get through it reasonably quickly if I push myself.
Today I read a paper titled “Look-ahead before you leap: end-to-end active recognition by forecasting the effect of motion”
My initial thoughts: A lot of vision recognition systems, especially the simpler ones, are based on static imagery. If recognition of a moving scene is deployed, it is rarely predictive and rarely (if ever) takes into account the motion of the observer. Actually having demonstrable theories about how to deploy vision recognition in an observer system that is itself moving is hugely beneficial in all sorts of field applications.
Update: I’ve read stuff about this previously, in some other papers. Hmmm… need to go back to the cited works to see who these guys are referencing when I get home this evening.
The abstract is:
Visual recognition systems mounted on autonomous moving agents face the challenge of unconstrained data, but simultaneously have the opportunity to improve their performance by moving to acquire new views of test data.
In this work, we first show how a recurrent neural network-based system may be trained to perform end-to-end learning of motion policies suited for the “active recognition” setting.
Further, we hypothesize that active vision requires an agent to have the capacity to reason about the effects of its motions on its view of the world.
To verify this hypothesis, we attempt to induce this capacity in our active recognition pipeline, by simultaneously learning to forecast the effects of the agent’s motions on its internal representation of its cumulative knowledge obtained from all past views.
Results across two challenging datasets confirm both that our end-to-end system successfully learns meaningful policies for active recognition, and that “learning to look ahead” further boosts recognition performance.
Today I read a paper titled “Creating Simplified 3D Models with High Quality Textures”
Initial thoughts: More Kinect research. It’s a fun little toy, and hooking up multiple Kinects together gives me a full 360-degree scanning system with very little occlusion. I have been integrating the Kinect scanners into some HoloLens AR experiments where I process the Kinect data off-board on a workstation, then feed that to the HoloLens via a dedicated data server so that I can overlay a transformational image onto the user’s body. Think of it like being able to see yourself wearing an Iron Man suit, or that scene in Blade Runner 2049 where the digital waifu overlays herself with the human sex worker for a weird Replicant, hologram, human three-way.
Combining the ideas from this paper with the observer-motion-prediction ideas from another Kinect paper I am working through opens up a huge opportunity to “body match” the hologram that the HoloLens is displaying with the human user. It is going to take some work, but I think I am on to something here.
This paper is tangentially related because what I am trying to do is scan the user (without a lot of noise), get high-resolution, non-occluded data, process it on a “lots of cores” workstation with a couple of high-end GPUs, and then feed that pre-processed model data into the HoloLens for real-time holographic overlay and real-time model distortion of the user. Basically, I am making a “digital fat suit/fun house mirror” that mimics the human as they move around.
The abstract is:
This paper presents an extension to the KinectFusion algorithm which allows creating simplified 3D models with high quality RGB textures.
This is achieved through (i) creating model textures using images from an HD RGB camera that is calibrated with Kinect depth camera, (ii) using a modified scheme to update model textures in an asymmetrical colour volume that contains a higher number of voxels than that of the geometry volume, (iii) simplifying dense polygon mesh model using quadric-based mesh decimation algorithm, and (iv) creating and mapping 2D textures to every polygon in the output 3D model.
The proposed method is implemented in real-time by means of GPU parallel processing.
Visualization via ray casting of both geometry and colour volumes provides users with a real-time feedback of the currently scanned 3D model.
Experimental results show that the proposed method is capable of keeping the model texture quality even for a heavily decimated model and that, when reconstructing small objects, photorealistic RGB textures can still be reconstructed.
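Step (iii), the quadric-based decimation, is the one piece that is trivial to try at home. If I remember the Open3D API correctly (worth double-checking the exact names), collapsing a dense scan to a fixed triangle budget looks roughly like this; the file names are hypothetical:

    import open3d as o3d

    # Load a dense scanned mesh, collapse it to ~10k triangles using
    # quadric error metrics, and save the simplified model.
    mesh = o3d.io.read_triangle_mesh("scan.ply")
    simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=10000)
    o3d.io.write_triangle_mesh("scan_simplified.ply", simplified)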
“Hey, connect with me. I’m the craziest/funniest/most popular/successful/most-connected/other-adjective person on LinkedIn.”
Thanks for reaching out but I’ll take a raincheck.
Should I want an endless stream of Instagram drivel, I know where to find you.
This is why it’s always important to look at what an inbound connection has been posting, forwarding and liking before hitting the “Accept” button.
If someone has written a recommendation for you based on how well you can make use of auto-responder scripts to pester your connections with irrelevancy you can pretty much kiss your chances of connecting goodbye.
I just don’t have the patience for that special brand of bullshit that directly sets out to waste my limited time on this Earth.
Today I read a paper titled “Noise in Structured-Light Stereo Depth Cameras: Modeling and its Applications”
My initial thoughts: I’ve done extensive experiments with the Kinect, and one of the things that has always annoyed me is just how darn noisy the RGBD image data is coming off the device. I’ve developed some algorithms that denoise it, but a lot of how well your software works is completely dependent on environmental issues. Light bloom from an outside window can totally throw off your application and render it all but useless during a critical demo (trust me, I know this first hand).
I think I can make use of some of the denoising techniques covered in this paper, combined with a 3D planes segmentation algorithm.
The abstract is:
Depth maps obtained from commercially available structured-light stereo based depth cameras, such as the Kinect, are easy to use but are affected by significant amounts of noise.
This paper is devoted to a study of the intrinsic noise characteristics of such depth maps, namely that the standard deviation of noise in estimated depth varies quadratically with the distance of the object from the depth camera.
We validate this theoretical model against empirical observations and demonstrate the utility of this noise model in three popular applications: depth map denoising, volumetric scan merging for 3D modeling, and identification of 3D planes in depth maps.
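The headline model is compact enough to use directly: axial depth noise grows roughly quadratically with distance. A sketch of fitting and evaluating such a model from repeated measurements of flat targets (the coefficients you get are device-specific; nothing here is the paper's calibration):

    import numpy as np

    def fit_noise_model(depths, stddevs):
        """Least-squares fit of sigma(z) = a + b*z + c*z**2 from
        observed depth noise at known target distances."""
        c, b, a = np.polyfit(depths, stddevs, deg=2)
        return a, b, c

    def noise_sigma(z, a, b, c):
        """Predicted depth-noise standard deviation at distance z."""
        return a + b * z + c * z * z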
Today I finished reading “The Small Bachelor” by P.G. Wodehouse
Today I finished reading “Pieces 5: Hellhound-02” by Masamune Shirow
Today I finished reading “The Natural Laws of Business: How to Harness the Power of Evolution, Physics, and Economics to Achieve Business Success” by Richard Koch
Today I read a paper titled “Efficient Upsampling of Natural Images”
The abstract is:
We propose a novel method of efficient upsampling of a single natural image.
Current methods for image upsampling tend to produce high-resolution images with either blurry salient edges, or loss of fine textural detail, or spurious noise artifacts.
In our method, we mitigate these effects by modeling the input image as a sum of edge and detail layers, operating upon these layers separately, and merging the upscaled results in an automatic fashion.
We formulate the upsampled output image as the solution to a non-convex energy minimization problem, and propose an algorithm to obtain a tractable approximate solution.
Our algorithm comprises two main stages.
1) For the edge layer, we use a nonparametric approach by constructing a dictionary of patches from a given image, and synthesize edge regions in a higher-resolution version of the image.
2) For the detail layer, we use a global parametric texture enhancement approach to synthesize detail regions across the image.
We demonstrate that our method is able to accurately reproduce sharp edges as well as synthesize photorealistic textures, while avoiding common artifacts such as ringing and haloing.
In addition, our method involves no training phase or estimation of model parameters, and is easily parallelizable.
We demonstrate the utility of our method on a number of challenging standard test photos.
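The layered decomposition is easy to mimic in spirit: split the image into a smooth base and a residual detail layer, upscale each with a method suited to it, and recombine. A crude sketch with a Gaussian split standing in for the paper's non-convex formulation:

    import numpy as np
    from scipy.ndimage import gaussian_filter, zoom

    def upsample_two_layer(img, factor=2.0, sigma=2.0):
        """Toy two-layer upsampling of a 2D grayscale array: cubic
        interpolation for the smooth base layer, the detail residual
        upscaled separately and added back."""
        base = gaussian_filter(img, sigma)
        detail = img - base
        base_up = zoom(base, factor, order=3)
        detail_up = zoom(detail, factor, order=1)
        return base_up + detail_up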
Today I finished reading “Empowered, Volume 8” by Adam Warren
Today I read a paper titled “Universal Coating for Programmable Matter”
The abstract is:
The idea behind universal coating is to have a thin layer of a specific substance covering an object of any shape so that one can measure a certain condition (like temperature or cracks) at any spot on the surface of the object without requiring direct access to that spot.
We study the universal coating problem in the context of self-organizing programmable matter consisting of simple computational elements, called particles, that can establish and release bonds and can actively move in a self-organized way.
Based on that matter, we present a worst-case work-optimal universal coating algorithm that uniformly coats any object of arbitrary shape and size that allows a uniform coating.
Our particles are anonymous, do not have any global information, have constant-size memory, and utilize only local interactions.
Today I read a paper titled “Evolving Shepherding Behavior with Genetic Programming Algorithms”
The abstract is:
We apply genetic programming techniques to the ‘shepherding’ problem, in which a group of one type of animal (sheep dogs) attempts to control the movements of a second group of animals (sheep) obeying flocking behavior.
Our genetic programming algorithm evolves an expression tree that governs the movements of each dog.
The operands of the tree are hand-selected features of the simulation environment that may allow the dogs to herd the sheep effectively.
The algorithm uses tournament-style selection, crossover reproduction, and point mutation.
We find that the evolved solutions generalize well and outperform a (naive) human-designed algorithm.
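Tournament selection is the one moving part here that fits in a few lines. A minimal sketch, assuming a population list and a fitness function where higher is better:

    import random

    def tournament_select(population, fitness, k=3):
        """Draw k random individuals and return the fittest; calling
        this repeatedly builds the mating pool that crossover and
        point mutation then operate on."""
        contenders = random.sample(population, k)
        return max(contenders, key=fitness)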
Today I finished reading “Skip School, Fly to Space: A Pearls Before Swine Collection” by Stephan Pastis
Today I read a paper titled “Artificial Persuasion in Pedagogical Games”
The abstract is:
A Persuasive Teachable Agent (PTA) is a special type of Teachable Agent which incorporates a persuasion theory in order to provide persuasive and more personalized feedback to the student.
By employing the persuasion techniques, the PTA seeks to maintain the student in a high motivation and high ability state in which he or she has higher cognitive ability and his or her changes in attitudes are more persistent.
However, the existing model of the PTA still has a few limitations.
Firstly, the existing PTA model focuses on modelling the PTA’s ability to persuade, but does not model its ability to be taught by the student and to practice the knowledge it has learnt.
Secondly, the quantitative model for computational processes in the PTA has low reusability.
Thirdly, there is still a gap between theoretical models and practical implementation of the PTA.
To address these three limitations, this book proposes an improved agent model which follows a goal-oriented approach and models the PTA in its totality by integrating the Persuasion Reasoning of the PTA with the Teachability Reasoning and the Practicability Reasoning.
The project also proposes a more abstract and generalized quantitative model for the computations in the PTA.
With higher level of abstraction, the reusability of the quantitative model is also improved.
A new system architecture is introduced to bridge the gap between theoretical models and implementation of the PTA.
Today I finished reading “Motivation” by Brian Tracy
Today I finished reading “Love Among the Chickens” by P.G. Wodehouse
Today I finished reading “Empowered Unchained Volume 1” by Adam Warren
Today I finished reading “The Man Upstairs and Other Stories” by P.G. Wodehouse
This week I am studying “Texturing in Substance Designer”
I admit I am a terrible texturer and texture designer, and I don’t think I will ever be anything but terrible, but I am willing to at least try.
Today I read a paper titled “Query-Efficient Imitation Learning for End-to-End Autonomous Driving”
The abstract is:
One way to approach end-to-end autonomous driving is to learn a policy function that maps from a sensory input, such as an image frame from a front-facing camera, to a driving action, by imitating an expert driver, or a reference policy.
This can be done by supervised learning, where a policy function is tuned to minimize the difference between the predicted and ground-truth actions.
A policy function trained in this way however is known to suffer from unexpected behaviours due to the mismatch between the states reachable by the reference policy and trained policy functions.
More advanced algorithms for imitation learning, such as DAgger, address this issue by iteratively collecting training examples from both reference and trained policies.
These algorithms often require a large number of queries to a reference policy, which is undesirable as the reference policy is often expensive.
In this paper, we propose an extension of DAgger, called SafeDAgger, that is query-efficient and more suitable for end-to-end autonomous driving.
We evaluate the proposed SafeDAgger in a car racing simulator and show that it indeed requires fewer queries to a reference policy.
We observe a significant speed up in convergence, which we conjecture to be due to the effect of automated curriculum learning.
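The DAgger loop itself is short; SafeDAgger's contribution is deciding when to query the expert at all. A schematic sketch of vanilla DAgger for orientation (env, expert, and learner are assumed interfaces, not any real API):

    def dagger(env, expert, learner, n_iters=10, horizon=1000):
        """Schematic DAgger: roll out the current learner, label every
        visited state with the expert's action, retrain on the
        aggregated dataset."""
        dataset = []
        for _ in range(n_iters):
            state = env.reset()
            for _ in range(horizon):
                action = learner.act(state)                 # learner drives
                dataset.append((state, expert.act(state)))  # expert labels
                state, done = env.step(action)
                if done:
                    break
            learner.train(dataset)                          # retrain on aggregate
        return learner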
Today I read a paper titled “Dynamic Bayesian Networks to simulate occupant behaviours in office buildings related to indoor air quality”
The abstract is:
This paper proposes a new general approach based on Bayesian networks to model human behaviour.
This approach represents human behaviour with probabilistic cause-effect relations based on knowledge, but also with conditional probabilities coming either from knowledge or deduced from observations.
This approach has been applied to the co-simulation of the CO2 concentration in an office coupled with human behaviour.
Today I finished reading “The Head of Kay’s” by P.G. Wodehouse
Today I read a paper titled “Egocentric Field-of-View Localization Using First-Person Point-of-View Devices”
The abstract is:
We present a technique that uses images, videos and sensor data taken from first-person point-of-view devices to perform egocentric field-of-view (FOV) localization.
We define egocentric FOV localization as capturing the visual information from a person’s field-of-view in a given environment and transferring this information onto a reference corpus of images and videos of the same space, hence determining what a person is attending to.
Our method matches images and video taken from the first-person perspective with the reference corpus and refines the results using the first-person’s head orientation information obtained using the device sensors.
We demonstrate single and multi-user egocentric FOV localization in different indoor and outdoor environments with applications in augmented reality, event understanding and studying social interactions.
An interesting claim: “We developed this helicopter in just six months.”
Designed? Or developed?
I suspect that certain aspects of the helicopter were designed and developed 10 times faster.
Or even faster than that.
But then many other aspects of the development took just as long.
The video claims, and we can assume based on other statements made by companies over the years, that a modern helicopter takes five years of development work.
Are we to presume that the entire endeavour was truly done in just six months?
This is their claim.
I don’t think it holds water.
Yes, I suspect the cockpit layout and some of the outer skin was designed in just six months.
VR speeds that process up because you don’t have to “cut metal” and then assemble everything.
You can figure out “does this part of the fuselage block the pilot’s view? Should we move it back a little?”
An IDE (Integrated Development Environment), an electronics circuit simulator, a 3D printer, Maya, Mudbox, ZBrush, Photoshop, and all of the APIs and frameworks and extensions, along with many other tools that we have today, enable us to perform magic that 40 years ago, when I started my career, would have been considered next to impossible.
To quote myself, “We could not build a modern computer or even the software to run on that modern computer fifty years ago because we did not have the tools to design the tools that would build the tools that would make the computer that would let us write the software.”
VR is an enabling technology.
I think VR and AR are going to change the landscape of how we design products.
It will enable engineers and designers to create products we cannot even yet dream of. VR, like all our other tools of wonder, will allow us to do ten times more things in the same amount of time it would take to do just one thing, and yes, some things will be done ten times faster, but our development, procurement, and approval processes, no matter how fantastic the tool, will still have certain inherent limitations until we change those processes as well.
Shout out to Unity3D at 1:13 in the above video.
Today I read a paper titled “The Computational Power of Dynamic Bayesian Networks”
The abstract is:
This paper considers the computational power of constant size, dynamic Bayesian networks.
Although discrete dynamic Bayesian networks are no more powerful than hidden Markov models, dynamic Bayesian networks with continuous random variables and discrete children of continuous parents are capable of performing Turing-complete computation.
With modified versions of existing algorithms for belief propagation, such a simulation can be carried out in real time.
This result suggests that dynamic Bayesian networks may be more powerful than previously considered.
Relationships to causal models and recurrent neural networks are also discussed.