Today I finished reading “Wards of Faerie” by Terry Brooks
Read – Lies My Teacher Told Me About Christopher Columbus
Today I finished reading “Lies My Teacher Told Me About Christopher Columbus: What Your History Books Got Wrong” by James W. Loewen
Paper – On using the Microsoft Kinect$^{\rm TM}$ sensors in the analysis of human motion
Today I read a paper titled “On using the Microsoft Kinect$^{\rm TM}$ sensors in the analysis of human motion”
The abstract is:
The present paper aims at providing the theoretical background required for investigating the use of the Microsoft Kinect$^{\rm TM}$ (‘Kinect’, for short) sensors (original and upgraded) in the analysis of human motion.
Our methodology is developed in such a way that it can be easily adapted to comparative studies of other systems used in capturing human-motion data.
Our future plans include the application of this methodology to two situations: first, in a comparative study of the performance of the two Kinect sensors; second, in pursuing their validation on the basis of comparisons with a marker-based system (MBS).
One important feature in our approach is the transformation of the MBS output into Kinect-output format, thus enabling the analysis of the measurements, obtained from different systems, with the same software application, i.e., the one we use in the analysis of Kinect-captured data; one example of such a transformation, for one popular marker-placement scheme (‘Plug-in Gait’), is detailed.
We propose that the similarity of the output, obtained from the different systems, be assessed on the basis of the comparison of a number of waveforms, representing the variation within the gait cycle of quantities which are commonly used in the modelling of the human motion.
The data acquisition may involve commercially-available treadmills and a number of velocity settings: for instance, walking-motion data may be acquired at $5$ km/h, running-motion data at $8$ and $11$ km/h.
We recommend that particular attention be called to systematic effects associated with the subject’s knee and lower leg, as well as to the ability of the Kinect sensors to reliably capture the details of the asymmetry of the motion between the left and right parts of the human body.
The previous versions of the study have been withdrawn due to the use of a non-representative database.
Read – The Gold Bat
Today I finished reading “The Gold Bat” by P.G. Wodehouse
Read – Wodehouse At The Wicket
Today I finished reading “Wodehouse At The Wicket: A Cricketing Anthology” by P.G. Wodehouse
Read – Pieces 4: Hellhound-01
Today I finished reading “Pieces 4: Hellhound-01” by Masamune Shirow
Paper – Neural Language Correction with Character-Based Attention
Today I read a paper titled “Neural Language Correction with Character-Based Attention”
The abstract is:
Natural language correction has the potential to help language learners improve their writing skills.
While approaches with separate classifiers for different error types have high precision, they do not flexibly handle errors such as redundancy or non-idiomatic phrasing.
On the other hand, word and phrase-based machine translation methods are not designed to cope with orthographic errors, and have recently been outpaced by neural models.
Motivated by these issues, we present a neural network-based approach to language correction.
The core component of our method is an encoder-decoder recurrent neural network with an attention mechanism.
By operating at the character level, the network avoids the problem of out-of-vocabulary words.
We illustrate the flexibility of our approach on a dataset of noisy, user-generated text collected from an English learner forum.
When combined with a language model, our method achieves a state-of-the-art $F_{0.5}$-score on the CoNLL 2014 Shared Task.
We further demonstrate that training the network on additional data with synthesized errors can improve performance.
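The core mechanism here (attention inside a character-level encoder-decoder) is easy to sketch. Below is my own toy version of dot-product attention in plain Python, with made-up two-dimensional character "embeddings"; it is not the paper's trained network, just the weighting step it relies on:

```python
import math

def softmax(xs):
    # Numerically stable softmax.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Dot-product attention: weight each value by query-key similarity."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    context = [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]
    return weights, context

# Toy "embeddings" for the three encoder states of the input "teh".
keys = values = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
query = [1.0, 0.0]  # a decoder state attending over the input characters
weights, context = attention(query, keys, values)
print([round(w, 3) for w in weights])
```

The first and third characters match the query equally well, so they share most of the attention mass; in the real model these vectors are learned, not hand-picked.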
Read – Maximum Ride #10
Today I finished reading “Maximum Ride #10” by James Patterson
Paper – Second Life Physics: Virtual, real, or surreal?
Today I read a paper titled “Second Life Physics: Virtual, real, or surreal?”
The abstract is:
Science teaching detached itself from reality and became restricted to classrooms and textbooks, with their overreliance on standardized and repetitive exercises, while students retain their own alternative conceptions.
Papert, displeased with this inefficient learning process, championed physics microworlds, where students could experience a variety of laws of motion, from Aristotle to Newton and Einstein or even new laws invented by the students themselves.
While often mistakenly seen as a game, Second Life (SL), the online 3-D virtual world hosted by Linden Lab, imposes essentially no rules on the residents beyond reasonable restrictions on improper behavior and the physical rules that guarantee its similitude to the real world.
As a consequence, SL qualifies itself as an environment for personal discovery and exploration as proposed by constructivist theories.
The physical laws are implemented through the well-known physics engine Havok, whose design aims to provide game-players a consistent, realistic environment.
The Havok User Guide (2008) explicitly encourages developers to use several tricks to cheat the simulator in order to make games funnier or easier to play.
As shown in this study, SL physics is unexpectedly neither idealized Newtonian physics nor a virtualization of real-world physics; it intentionally diverges from reality in such a way that it could be called hyper-real.
In fact, while some of its features make objects behave more realistically than real ones, certain quantities, like energy, have a totally different meaning in SL as compared to physics.
Far from considering this a problem, however, the author argues that this hyper-reality may be a golden teaching opportunity, allowing surreal physics simulations and epistemologically rich classroom discussions around the “what is a physical law?” issue, in accordance with Papert’s never-implemented proposal.
Paper – 3D ShapeNets: A Deep Representation for Volumetric Shapes
Today I read a paper titled “3D ShapeNets: A Deep Representation for Volumetric Shapes”
The abstract is:
3D shape is a crucial but heavily underutilized cue in today’s computer vision systems, mostly due to the lack of a good generic shape representation.
With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop.
Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding.
To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network.
Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representations automatically.
It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning.
To train our 3D deep learning model, we construct ModelNet — a large-scale 3D CAD model dataset.
Extensive experiments show that our 3D deep representation enables significant performance improvement over the state of the art in a variety of tasks.
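The input representation here is just a binary occupancy grid over voxels. A minimal sketch of how a point set could be dropped into such a grid (my own toy code, assuming uniform scaling across axes; the paper actually voxelizes CAD meshes):

```python
def voxelize(points, resolution=30):
    """Map 3D points into a binary occupancy grid (a set of voxel indices),
    the kind of input a volumetric network like 3D ShapeNets consumes."""
    xs, ys, zs = zip(*points)
    lo = (min(xs), min(ys), min(zs))
    hi = (max(xs), max(ys), max(zs))
    span = max(h - l for h, l in zip(hi, lo)) or 1.0  # avoid division by zero
    grid = set()
    for p in points:
        idx = tuple(min(int((c - l) / span * resolution), resolution - 1)
                    for c, l in zip(p, lo))
        grid.add(idx)
    return grid

# Two opposite corners of a unit cube land in opposite corner voxels.
occupied = voxelize([(0, 0, 0), (1, 1, 1)], resolution=4)
print(sorted(occupied))  # [(0, 0, 0), (3, 3, 3)]
```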
Read – Enter Jeeves
Today I finished reading “Enter Jeeves: 15 Early Stories” by P.G. Wodehouse
Read – The Atlas of Emergency Medicine
Today I finished reading “The Atlas of Emergency Medicine, Third Edition” by Kevin Knoop
Read – Jeeves and the Unbidden Guest
Today I finished reading “Jeeves and the Unbidden Guest” by P.G. Wodehouse
Paper – Violation of classical inequalities by photon frequency-filtering
Today I read a paper titled “Violation of classical inequalities by photon frequency-filtering”
The abstract is:
The violation of the Cauchy-Schwarz and Bell inequalities ranks among the major evidence of the genuinely quantum nature of an emitter.
We show that by dispensing with the usual approximation of mode correlations and studying directly the correlations between the physical reality (the photons), these violations can be optimized.
This is achieved by extending the concept of photon correlations to all frequencies in all the possible windows of detections, with no prejudice to the supposed origin of the photons.
We identify the regions of quantum emission as rooted in collective de-excitation involving virtual states instead of, as previously assumed, cascaded transitions between real states.
Read – In Joy Still Felt
Today I finished reading “In Joy Still Felt: The Autobiography, 1954-1978” by Isaac Asimov
Paper – Curve Networks for Surface Reconstruction
Today I read a paper titled “Curve Networks for Surface Reconstruction”
The abstract is:
Man-made objects usually exhibit descriptive curved features (i.e., curve networks).
The curve network of an object conveys its high-level geometric and topological structure.
We present a framework for extracting feature curve networks from unstructured point cloud data.
Our framework first generates a set of initial curved segments fitting highly curved regions.
We then optimize these curved segments to respect both data fitting and structural regularities.
Finally, the optimized curved segments are extended and connected into curve networks using a clustering method.
To facilitate effectiveness in case of severe missing data and to resolve ambiguities, we develop a user interface for completing the curve networks.
Experiments on various imperfect point cloud data validate the effectiveness of our curve network extraction framework.
We demonstrate the usefulness of the extracted curve networks for surface reconstruction from incomplete point clouds.
Read – Arm of the Law
Today I finished reading “Arm of the Law” by Harry Harrison
Read – The Graveyard Book
Today I finished reading “The Graveyard Book” by Neil Gaiman
Studying – Creating realistic 3D portraits
This month I am studying “Creating realistic 3D portraits”
An online class with a bunch of pre-recorded video and some exercises to work through.
Read – Remember, Remember (The Fifth Of November)
Today I finished reading “Remember, Remember (The Fifth Of November): The History Of Britain In Bite Sized Chunks” by Judy Parkinson
Read – Casual and Social Games: Advanced Game Design
Today I finished reading “Casual and Social Games: Advanced Game Design” by Ernest Adams
Paper – Managing Overstaying Electric Vehicles in Park-and-Charge Facilities
Today I read a paper titled “Managing Overstaying Electric Vehicles in Park-and-Charge Facilities”
The abstract is:
With the increase in adoption of Electric Vehicles (EVs), proper utilization of the charging infrastructure is an emerging challenge for service providers.
Overstaying of an EV after a charging event is a key contributor to low utilization.
Since overstaying is easily detectable by monitoring the power drawn from the charger, managing this problem primarily involves designing an appropriate penalty during the overstaying period.
Higher penalties do discourage overstaying; however, due to uncertainty in parking duration, fewer people would find such penalties acceptable, leading to decreased utilization (and revenue).
To analyze this central trade-off, we develop a novel framework that integrates models for realistic user behavior into queueing dynamics to locate the optimal penalty from the points of view of utilization and revenue, for different values of the external charging demand.
Next, when the model parameters are unknown, we show how an online learning algorithm, such as UCB, can be adapted to learn the optimal penalty.
Our experimental validation, based on charging data from London, shows that an appropriate penalty can increase both utilization and revenue while significantly reducing overstaying events.
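UCB is the standard bandit algorithm the authors adapt when the model parameters are unknown. A bare-bones UCB1 sketch, treating each candidate penalty level as an arm with a made-up Bernoulli "revenue" signal (my assumption for illustration, not the paper's queueing model):

```python
import math
import random

def ucb1(arms, rounds, seed=0):
    """UCB1: play the arm maximizing mean reward + sqrt(2 ln t / n)."""
    rng = random.Random(seed)
    counts = [0] * len(arms)
    sums = [0.0] * len(arms)
    for t in range(1, rounds + 1):
        if t <= len(arms):
            a = t - 1  # play each arm once to initialize
        else:
            a = max(range(len(arms)),
                    key=lambda i: sums[i] / counts[i]
                    + math.sqrt(2 * math.log(t) / counts[i]))
        reward = rng.random() < arms[a]  # Bernoulli draw at the arm's rate
        counts[a] += 1
        sums[a] += reward
    return counts

# Hypothetical revenue rates for three candidate penalty levels.
counts = ucb1([0.3, 0.7, 0.5], rounds=2000)
print(counts)  # the middle (best) arm accumulates most of the plays
```

The exploration bonus shrinks as an arm is sampled, so after enough rounds the algorithm concentrates on whichever penalty level yields the best observed return.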
Paper – The physics of volume rendering
Today I read a paper titled “The physics of volume rendering”
The abstract is:
Radiation transfer is an important topic in several physical disciplines, probably most prominently in astrophysics.
Computer scientists use radiation transfer, among other things, for the visualisation of complex data sets with direct volume rendering.
In this note, I point out the connection between physical radiation transfer and volume rendering, and I describe an implementation of direct volume rendering in the astrophysical radiation transfer code RADMC-3D.
I show examples for the use of this module on analytical models and simulation data.
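The connection the paper points out is the emission-absorption form of radiative transfer, which in direct volume rendering becomes front-to-back compositing along each ray. A minimal sketch of that compositing loop (my own toy code, not the RADMC-3D implementation):

```python
def composite(samples):
    """Front-to-back emission-absorption compositing along one ray.
    Each sample is (color, opacity); this is the discretized form of
    the radiative transfer equation dI/ds = j - alpha * I."""
    color, alpha = 0.0, 0.0
    for c, a in samples:
        color += (1.0 - alpha) * c * a  # add light not yet occluded
        alpha += (1.0 - alpha) * a      # accumulate opacity
        if alpha >= 0.999:              # early ray termination
            break
    return color, alpha

# A fully opaque first sample hides everything behind it.
print(composite([(1.0, 1.0), (0.5, 1.0)]))  # (1.0, 1.0)
```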
Paper – Augmented Reality Oculus Rift
Today I read a paper titled “Augmented Reality Oculus Rift”
The abstract is:
This paper covers the whole process of developing an Augmented Reality Stereoscopic Render Engine for the Oculus Rift.
To capture the real world in form of a camera stream, two cameras with fish-eye lenses had to be installed on the Oculus Rift DK1 hardware.
The idea was inspired by Steptoe \cite{steptoe2014presence}.
After the introduction, a theoretical part covers the elements most necessary to achieve an AR system for the Oculus Rift, followed by an implementation part where the code from the AR Stereo Engine is explained in more detail.
A short conclusion section shows some results and reflects on the experience, and the final chapter discusses future work.
The project can be accessed via the git repository this https URL .
Paper – Learning Physical Intuition of Block Towers by Example
Today I read a paper titled “Learning Physical Intuition of Block Towers by Example”
The abstract is:
Wooden blocks are a common toy for infants, allowing them to develop motor skills and gain intuition about the physical behavior of the world.
In this paper, we explore the ability of deep feed-forward models to learn such intuitive physics.
Using a 3D game engine, we create small towers of wooden blocks whose stability is randomized and render them collapsing (or remaining upright).
This data allows us to train large convolutional network models which can accurately predict the outcome, as well as estimating the block trajectories.
The models are also able to generalize in two important ways: (i) to new physical scenarios, e.g. towers with an additional block, and (ii) to images of real wooden blocks, where they obtain performance comparable to human subjects.
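For context, the classical physics the network is approximating here is essentially a center-of-mass check at each interface. A toy version, assuming equal-mass 2D blocks (my simplification; the paper renders full 3D towers in a game engine):

```python
def tower_is_stable(blocks):
    """Each block is (x_center, width), listed bottom to top.
    The stack is stable if, at every interface, the center of mass of
    all blocks above lies over the supporting block's footprint."""
    for i in range(len(blocks) - 1):
        above = blocks[i + 1:]
        com = sum(x for x, _ in above) / len(above)  # equal-mass assumption
        x, w = blocks[i]
        if not (x - w / 2 <= com <= x + w / 2):
            return False
    return True

print(tower_is_stable([(0.0, 1.0), (0.2, 1.0)]))  # True: small offset
print(tower_is_stable([(0.0, 1.0), (0.8, 1.0)]))  # False: topples
```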
Paper – Towards Verified Artificial Intelligence
Today I read a paper titled “Towards Verified Artificial Intelligence”
The abstract is:
Verified artificial intelligence (AI) is the goal of designing AI-based systems that are provably correct with respect to mathematically-specified requirements.
This paper considers Verified AI from a formal methods perspective.
We describe five challenges for achieving Verified AI, and five corresponding principles for addressing these challenges.
Read – The Phantom of Kansas
Today I finished reading “The Phantom of Kansas” by John Varley
Read – Emergency Medicine Manual
Today I finished reading “Emergency Medicine Manual” by O. John Ma
Read – Hero from Otherwhere
Today I finished reading “Hero from Otherwhere” by Jay Williams
Paper – Effects of Coupling in Human-Virtual Agent Body Interaction
Today I read a paper titled “Effects of Coupling in Human-Virtual Agent Body Interaction”
The abstract is:
This paper presents a study of the dynamic coupling between a user and a virtual character during body interaction.
Coupling is directly linked with other dimensions, such as co-presence, engagement, and believability, and was measured in an experiment that allowed users to describe their subjective feelings about those dimensions of interest.
The experiment was based on a theatrical game involving the imitation of slow upper-body movements and the proposal of new movements by the user and virtual agent.
The agent’s behaviour varied in autonomy: the agent could limit itself to imitating the user’s movements only, initiate new movements, or combine both behaviours.
After the game, each participant completed a questionnaire regarding their engagement in the interaction, their subjective feeling about the co-presence of the agent, etc.
Based on four main dimensions of interest, we tested several hypotheses against our experimental results, which are discussed here.
Read – Atlas of Human Anatomy
Today I finished reading “Atlas of Human Anatomy” by Frank Netter
Paper – Leading birds by their beaks: the response of flocks to external perturbations
Today I read a paper titled “Leading birds by their beaks: the response of flocks to external perturbations”
The abstract is:
We study the asymptotic response of polar ordered active fluids (“flocks”) to small external aligning fields $h$.
The longitudinal susceptibility $\chi_{_\parallel}$ diverges, in the thermodynamic limit, like $h^{-\nu}$ as $h \rightarrow 0$.
In finite systems of linear size $L$, $\chi_{_\parallel}$ saturates to a value $\sim L^\gamma$.
The universal exponents $\nu$ and $\gamma$ depend only on the spatial dimensionality $d$, and are related to the dynamical exponent $z$ and the “roughness exponent” $\alpha$ characterizing the unperturbed flock dynamics.
Using a well supported conjecture for the values of these two exponents, we obtain $\nu = 2/3$, $\gamma = 4/5$ in $d = 2$ and $\nu = 1/4$, $\gamma = 2/5$ in $d = 3$.
These values are confirmed by our simulations.
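A consistency check worth writing out: the finite-size saturation follows from matching the two scaling forms quoted above, which also fixes the crossover field.

```latex
\chi_{_\parallel}(h^*) \sim (h^*)^{-\nu} \sim L^{\gamma}
\quad\Longrightarrow\quad
h^* \sim L^{-\gamma/\nu},
```

so with the quoted exponents, $\gamma/\nu = (4/5)/(2/3) = 6/5$ in $d = 2$ and $\gamma/\nu = (2/5)/(1/4) = 8/5$ in $d = 3$.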
Read – The Elves of Cintra
Today I finished reading “The Elves of Cintra” by Terry Brooks
Read – Fundamentals of Shooter Game Design
Today I finished reading “Fundamentals of Shooter Game Design” by Ernest Adams
Studying – Creating retro futuristic illustrations with Photoshop
This month I am studying “Creating retro futuristic illustrations with Photoshop”
Bit of a short course. 15 hours of pre-recorded video and some exercise files.
Paper – End to End Learning for Self-Driving Cars
Today I read a paper titled “End to End Learning for Self-Driving Cars”
The abstract is:
We trained a convolutional neural network (CNN) to map raw pixels from a single front-facing camera directly to steering commands.
This end-to-end approach proved surprisingly powerful.
With minimal training data from humans, the system learns to drive in traffic on local roads with or without lane markings and on highways.
It also operates in areas with unclear visual guidance such as in parking lots and on unpaved roads.
The system automatically learns internal representations of the necessary processing steps, such as detecting useful road features, with only the human steering angle as the training signal.
We never explicitly trained it to detect, for example, the outline of roads.
Compared to explicit decomposition of the problem, such as lane marking detection, path planning, and control, our end-to-end system optimizes all processing steps simultaneously.
We argue that this will eventually lead to better performance and smaller systems.
Better performance will result because the internal components self-optimize to maximize overall system performance, instead of optimizing human-selected intermediate criteria, e.g., lane detection.
Such criteria understandably are selected for ease of human interpretation, which doesn’t automatically guarantee maximum system performance.
Smaller networks are possible because the system learns to solve the problem with the minimal number of processing steps.
We used an NVIDIA DevBox and Torch 7 for training and an NVIDIA DRIVE(TM) PX self-driving car computer also running Torch 7 for determining where to drive.
The system operates at 30 frames per second (FPS).
Read – Usagi Yojimbo Volume 28: Red Scorpion
Today I finished reading “Usagi Yojimbo Volume 28: Red Scorpion” by Stan Sakai
Read – Bachelors Anonymous
Today I finished reading “Bachelors Anonymous” by P.G. Wodehouse
Read – The Good Psychopath’s Guide to Success
Today I finished reading “The Good Psychopath’s Guide to Success” by Andy McNab
Read – Jarka Ruus
Today I finished reading “Jarka Ruus” by Terry Brooks
Read – Fundamentals of Vehicle Simulation Design
Today I finished reading “Fundamentals of Vehicle Simulation Design” by Ernest Adams
Read – American Connections
Today I finished reading “American Connections: The Founding Fathers. Networked.” by James Burke
Read – Distrust That Particular Flavor
Today I finished reading “Distrust That Particular Flavor” by William Gibson
Paper – Complexity of Shift Bribery in Committee Elections
Today I read a paper titled “Complexity of Shift Bribery in Committee Elections”
The abstract is:
We study the (parameterized) complexity of SHIFT BRIBERY for multiwinner voting rules.
We focus on SNTV, Bloc, k-Borda, and Chamberlin-Courant, as well as on approximate variants of Chamberlin-Courant, since the original rule is NP-hard to compute.
We show that SHIFT BRIBERY tends to be significantly harder in the multiwinner setting than in the single-winner one by showing settings where SHIFT BRIBERY is easy in the single-winner cases, but is hard (and hard to approximate) in the multiwinner ones.
Moreover, we show that the non-monotonicity of those rules which are based on approximation algorithms for the Chamberlin-Courant rule sometimes affects the complexity of SHIFT BRIBERY.
Studying – Creating a technical illustration cutaway
This month I am studying “Creating a technical illustration cutaway”
In-person workshop/class with instructor feedback on work.
I looooove maps.
Maps and cutaway illustrations have always fascinated me.
Now I get to spend an entire month learning how cutaway illustrations are put together and designing some of my own.
Read – The 80/20 Principle and 92 Other Power Laws of Nature
Today I finished reading “The 80/20 Principle and 92 Other Power Laws of Nature: The Science of Success” by Richard Koch
Paper – Self-Assembling Systems are Distributed Systems
Today I read a paper titled “Self-Assembling Systems are Distributed Systems”
The abstract is:
In 2004, Klavins et al. introduced the use of graph grammars to describe — and to program — systems of self-assembly.
We show that these graph grammars can be embedded in a graph rewriting characterization of distributed systems that was proposed by Degano and Montanari over twenty years ago.
We apply this embedding to generalize Soloveichik and Winfree’s local determinism criterion (for achieving a unique terminal assembly), from assembly systems of 4-sided tiles that embed in the plane, to arbitrary graph assembly systems.
We present a partial converse of the embedding result, by providing sufficient conditions under which systems of distributed processors can be simulated by graph assembly systems topologically, in the plane, and in 3-space.
We conclude by defining a new complexity measure: “surface cost” (essentially the convex hull of the space inhabited by agents at the conclusion of a self-assembled computation).
We show that, for growth-bounded graphs, executing a subroutine to find a Maximum Independent Set only increases the surface cost of a self-assembling computation by a constant factor.
We obtain this complexity bound by using the simulation results to import the distributed computing notions of “local synchronizer” and “deterministic coin flipping” into self-assembly.
Read – Superheroes
Today I finished reading “Superheroes” by John Varley
Read – Jeeves and the Impending Doom
Today I finished reading “Jeeves and the Impending Doom” by P.G. Wodehouse
Read – The 80/20 Manager
Today I finished reading “The 80/20 Manager: Ten ways to become a great leader” by Richard Koch