Today I finished reading “Service With a Smile” by P.G. Wodehouse
Read – The Luck of the Bodkins
Today I finished reading “The Luck of the Bodkins” by P.G. Wodehouse
Read – Piccadilly Jim
Today I finished reading “Piccadilly Jim” by P.G. Wodehouse
Read – Ghost in the Shell 1.5: Human-error Processor
Today I finished reading “Ghost in the Shell 1.5: Human-error Processor” by Masamune Shirow
Read – The Guild: Knights of Good
Today I finished reading “The Guild: Knights of Good” by Felicia Day
Paper – Robust Supervisors for Intersection Collision Avoidance in the Presence of Uncontrolled Vehicles
Today I read a paper titled “Robust Supervisors for Intersection Collision Avoidance in the Presence of Uncontrolled Vehicles”
The abstract is:
We present the design and validation of a centralized controller, called a supervisor, for collision avoidance of multiple human-driven vehicles at a road intersection, considering measurement errors, unmodeled dynamics, and uncontrolled vehicles.
We design the supervisor to be least restrictive, that is, to minimize its interferences with human drivers.
This performance metric is given a precise mathematical form by splitting the design process into two subproblems: a verification problem and a supervisor-design problem.
The verification problem determines whether an input signal exists that makes controlled vehicles avoid collisions at all future times.
The supervisor is designed such that if the verification problem returns yes, it allows the drivers’ desired inputs; otherwise, it overrides controlled vehicles to prevent collisions.
As a result, we propose exact and efficient supervisors.
The exact supervisor solves the verification problem exactly but with combinatorial complexity.
In contrast, the efficient supervisor solves the verification problem within a quantified approximation bound, in time polynomial in the number of controlled vehicles.
We validate the performances of both supervisors through simulation and experimental testing.
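The supervisor's decision rule is simple to state, so I sketched a toy version of it in Python for two point vehicles on crossing lanes. The dynamics, the conflict-zone test, and the tiny set of candidate constant accelerations are all stand-ins of my own; the paper searches the input space exactly (or within its quantified approximation bound) rather than over a handful of canned inputs.

```python
import itertools
import numpy as np

DT, STEPS = 0.1, 80                  # time step [s] and prediction horizon
CONFLICT = (48.0, 52.0)              # stretch of each lane occupied by the intersection
CANDIDATES = (-3.0, 0.0, 2.0)        # candidate constant accelerations [m/s^2]

def rollout(p, v, a, steps=STEPS):
    """Positions of one vehicle under constant acceleration a (speed clipped at 0)."""
    xs = []
    for _ in range(steps):
        v = max(0.0, v + a * DT)
        p += v * DT
        xs.append(p)
    return np.array(xs)

def in_conflict(xs1, xs2):
    """True if both vehicles are inside the conflict zone at the same time step."""
    in1 = (xs1 > CONFLICT[0]) & (xs1 < CONFLICT[1])
    in2 = (xs2 > CONFLICT[0]) & (xs2 < CONFLICT[1])
    return bool(np.any(in1 & in2))

def verify(state, desired):
    """Is there an input signal that starts with the drivers' desired accelerations
    for one step and then keeps the vehicles collision-free over the horizon?"""
    (p1, v1), (p2, v2) = state
    v1 = max(0.0, v1 + desired[0] * DT); p1 += v1 * DT
    v2 = max(0.0, v2 + desired[1] * DT); p2 += v2 * DT
    return any(not in_conflict(rollout(p1, v1, a1), rollout(p2, v2, a2))
               for a1, a2 in itertools.product(CANDIDATES, repeat=2))

def supervisor(state, desired):
    # Least restrictive: pass the drivers' inputs through whenever verification succeeds.
    return desired if verify(state, desired) else (-3.0, -3.0)   # otherwise override: brake

# Each vehicle is (position along its own lane [m], speed [m/s]); both drivers coast.
print(supervisor(((38.0, 12.0), (36.0, 11.0)), desired=(0.0, 0.0)))   # still resolvable later
print(supervisor(((40.0, 12.0), (42.0, 11.0)), desired=(0.0, 0.0)))   # no safe continuation
```

In the first call a later intervention can still avoid the conflict, so the drivers keep control; in the second call no candidate continuation is safe and the toy supervisor overrides. The real supervisor, applied at every time step, would have stepped in before the second situation could arise.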
Read – Eragon
Today I finished reading “Eragon” by Christopher Paolini
Paper – PolyDepth: Real-time Penetration Depth Computation using Iterative Contact-Space Projection
Today I read a paper titled “PolyDepth: Real-time Penetration Depth Computation using Iterative Contact-Space Projection”
The abstract is:
We present a real-time algorithm that finds the Penetration Depth (PD) between general polygonal models based on iterative and local optimization techniques.
Given an in-collision configuration of an object in configuration space, we find an initial collision-free configuration using several methods such as centroid difference, maximally clear configuration, motion coherence, random configuration, and sampling-based search.
We project this configuration onto a local contact space using a variant of a continuous collision detection algorithm and construct a linear convex cone around the projected configuration.
We then formulate a new projection of the in-collision configuration onto the convex cone as a Linear Complementarity Problem (LCP), which we solve using a type of Gauss-Seidel iterative algorithm.
We repeat this procedure until a locally optimal PD is obtained.
Our algorithm can process complicated models consisting of tens of thousands of triangles at interactive rates.
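The LCP step is the easiest part to picture in code. Below is a plain projected Gauss-Seidel solver for a generic LCP (find z >= 0 with w = Mz + q >= 0 and z^T w = 0), the family of solvers the abstract refers to; it is a textbook sketch, not the PolyDepth implementation, and in the paper M and q would come from the linearized contact-space cone rather than the arbitrary example below.

```python
import numpy as np

def lcp_projected_gauss_seidel(M, q, iters=200, tol=1e-9):
    """Find z >= 0 such that w = M z + q >= 0 and z[i] * w[i] = 0 for every i."""
    z = np.zeros_like(q)
    for _ in range(iters):
        z_old = z.copy()
        for i in range(len(q)):
            r = q[i] + M[i] @ z - M[i, i] * z[i]   # residual excluding the i-th unknown
            z[i] = max(0.0, -r / M[i, i])          # solve for z[i], project onto z[i] >= 0
        if np.max(np.abs(z - z_old)) < tol:
            break
    return z

# Tiny example with a symmetric positive definite M, where Gauss-Seidel converges.
M = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-1.0, -2.0])
z = lcp_projected_gauss_seidel(M, q)
print(z, M @ z + q)   # z >= 0, w = Mz + q >= 0, and z*w is (numerically) zero
```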
Paper – Preprint Virtual Reality Assistant Technology for Learning Primary Geography
Today I read a paper titled “Preprint Virtual Reality Assistant Technology for Learning Primary Geography”
The abstract is:
This is the preprint version of our paper at ICWL 2015.
A virtual reality based enhanced technology for learning primary geography is proposed, which synthesizes several of the latest information technologies, including virtual reality (VR), 3D geographical information systems (GIS), 3D visualization, and multimodal human-computer interaction (HCI).
The main functions of the proposed system are introduced, i.e., buffer analysis, overlay analysis, space convex hull calculation, space convex decomposition, 3D topology analysis, and 3D space intersection detection.
The multimodal technologies are employed in the system to enhance the immersive perception of the users.
Studying – Illustrator one-on-one intermediate
This month I am studying “Illustrator one-on-one intermediate”
Upgrading my Illustrator skills for 2016 by learning the ins and outs of the new Illustrator CC.
Read – Superconnect
Today I finished reading “Superconnect: Harnessing the Power of Networks and the Strength of Weak Links” by Richard Koch
Paper – Real-time correction of panoramic images using hyperbolic Möbius transformations
Today I read a paper titled “Real-time correction of panoramic images using hyperbolic Möbius transformations”
The abstract is:
Wide-angle images have gained huge popularity in recent years due to the development of computational photography and advances in imaging technology.
They present the information of a scene in a way which is more natural for the human eye but, on the other hand, they introduce artifacts such as bent lines.
These artifacts become more and more unnatural as the field of view increases.
In this work, we present a technique aimed to improve the perceptual quality of panorama visualization.
The main ingredients of our approach are, on the one hand, treating the viewing sphere as a Riemann sphere, which makes it natural to apply Möbius (complex) transformations to the input image, and, on the other hand, a projection scheme that changes as a function of the field of view used.
We also introduce an implementation of our method, compare it against images produced with other methods and show that the transformations can be done in real-time, which makes our technique very appealing for new settings, as well as for existing interactive panorama applications.
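The core operation is fun to play with: treat image-plane points as complex numbers (the viewing sphere being identified with the Riemann sphere) and push them through a Möbius transformation f(z) = (az + b)/(cz + d). The coefficients below give an arbitrary hyperbolic transformation fixing -1 and +1; the paper's actual transformations, and its projection that changes with the field of view, are not reproduced here.

```python
import numpy as np

def mobius(z, a, b, c, d):
    assert abs(a * d - b * c) > 1e-12          # coefficients must be non-degenerate
    return (a * z + b) / (c * z + d)

# A grid of image-plane coordinates interpreted as complex numbers.
xs, ys = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
z = xs + 1j * ys

# A hyperbolic Mobius transformation with fixed points -1 and +1: it is conjugate
# to z -> lam * z, so it stretches the sphere along the axis through those points.
lam = 1.5
w = mobius(z, a=1 + lam, b=1 - lam, c=1 - lam, d=1 + lam)
print(np.round(w, 3))
```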
Read – Realware
Today I finished reading “Realware” by Rudy Rucker
Paper – Self-propelled Chimeras
Today I read a paper titled “Self-propelled Chimeras”
The abstract is:
We report the appearance of chimera states in a minimal extension of the classical Vicsek model for collective motion of self-propelled particle systems.
Inspired by earlier works on chimera states in the Kuramoto model, we introduce a phase lag parameter in the particle alignment dynamics.
Compared to the oscillatory networks with fixed site positions, the self-propelled particle systems can give rise to distinct forms of chimeras resembling moving flocks through an incoherent surrounding, for which we characterize their parameter domains.
More specifically, we detect localized directional one-headed and multi-headed chimera states, as well as scattered directional chimeras without space localization.
We discuss canonical generalizations of the elementary Vicsek model and show chimera states for them indicating the universality of this novel behavior.
A continuum limit of the particle system is derived that preserves the chimeric behavior.
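Out of curiosity I sketched what a phase-lagged Vicsek update might look like in numpy. How the lag enters here (subtracted from the neighborhood's mean heading, in the spirit of the Kuramoto-Sakaguchi lag the abstract alludes to) is my guess at a minimal version, not necessarily the paper's definition, and none of the parameter values are theirs.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, R, V0, ETA, ALPHA, DT = 200, 10.0, 1.0, 0.3, 0.2, 0.6, 1.0

pos = rng.uniform(0, L, size=(N, 2))          # positions in a periodic box of side L
theta = rng.uniform(-np.pi, np.pi, size=N)    # headings

def step(pos, theta):
    # pairwise displacements with periodic boundary conditions
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    neighbors = (d ** 2).sum(-1) < R ** 2     # each particle counts as its own neighbor
    # mean heading of the neighborhood, then apply the phase lag and angular noise
    mean_dir = np.angle((neighbors * np.exp(1j * theta)[None, :]).sum(axis=1))
    theta = mean_dir - ALPHA + ETA * rng.uniform(-np.pi, np.pi, size=N)
    pos = (pos + V0 * DT * np.stack([np.cos(theta), np.sin(theta)], axis=1)) % L
    return pos, theta

for _ in range(100):
    pos, theta = step(pos, theta)
# polar order parameter: 1 means a fully aligned flock, values near 0 mean disorder
print(abs(np.exp(1j * theta).mean()))
```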
Read – Complete Stories
Today I finished reading “Complete Stories” by Rudy Rucker
Paper – Real-time 3D scene description using Spheres, Cones and Cylinders
Today I read a paper titled “Real-time 3D scene description using Spheres, Cones and Cylinders”
The abstract is:
The paper describes a novel real-time algorithm for finding 3D geometric primitives (cylinders, cones and spheres) from 3D range data.
At its core, it performs a fast model fitting with a model update in constant time (O(1)) for each new data point added to the model.
We use a three-stage approach. The first step inspects 1.5D subspaces to find ellipses.
The next stage uses these ellipses as input by examining their neighborhood structure to form sets of candidates for the 3D geometric primitives.
Finally, candidate ellipses are fitted to the geometric primitives.
The complexity for point processing is O(n); additional lower-order time is needed for working on the significantly smaller number of mid-level objects.
This allows the approach to process 30 frames per second on Kinect depth data, which suggests this approach as a pre-processing step for 3D real-time higher level tasks in robotics, like tracking or feature based mapping.
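The constant-time model update is the detail I found most interesting, so here is a generic sketch of the idea for a sphere: the algebraic fit |p|^2 = 2 c.p - k (with k = |c|^2 - r^2) is linear in the unknowns (c, k), so each new point only adds a rank-one term to a fixed-size normal-equation system. This is a standard incremental least-squares trick, not the paper's exact estimator, and the test data are made up.

```python
import numpy as np

class IncrementalSphereFit:
    """Accumulates a 4x4 least-squares system; adding a point is O(1) work."""
    def __init__(self):
        self.AtA = np.zeros((4, 4))
        self.Atb = np.zeros(4)

    def add_point(self, p):
        row = np.array([2 * p[0], 2 * p[1], 2 * p[2], -1.0])
        self.AtA += np.outer(row, row)          # constant-time update per point
        self.Atb += row * (p @ p)

    def solve(self):                            # fixed-size solve, also O(1)
        cx, cy, cz, k = np.linalg.solve(self.AtA, self.Atb)
        center = np.array([cx, cy, cz])
        return center, np.sqrt(center @ center - k)

# Noisy points on a sphere of radius 2 centered at (1, -1, 0.5).
rng = np.random.default_rng(1)
dirs = rng.normal(size=(500, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = np.array([1.0, -1.0, 0.5]) + 2.0 * dirs + 0.01 * rng.normal(size=(500, 3))

fit = IncrementalSphereFit()
for p in pts:
    fit.add_point(p)
print(fit.solve())   # roughly (array([ 1., -1., 0.5]), 2.0)
```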
Read – The Purloined Paperweight
Today I finished reading “The Purloined Paperweight” by P.G. Wodehouse
Paper – Lens Factory: Automatic Lens Generation Using Off-the-shelf Components
Today I read a paper titled “Lens Factory: Automatic Lens Generation Using Off-the-shelf Components”
The abstract is:
Custom optics is a necessity for many imaging applications.
Unfortunately, custom lens design is costly (thousands to tens of thousands of dollars), time consuming (10-12 weeks typical lead time), and requires specialized optics design expertise.
By using only inexpensive, off-the-shelf lens components the Lens Factory automatic design system greatly reduces cost and time.
Design, ordering of parts, delivery, and assembly can be completed in a few days, at a cost in the low hundreds of dollars.
Lens design constraints, such as focal length and field of view, are specified in terms familiar to the graphics community so no optics expertise is necessary.
Unlike conventional lens design systems, which only use continuous optimization methods, Lens Factory adds a discrete optimization stage.
This stage searches the combinatorial space of possible combinations of lens elements to find novel designs, evolving simple canonical lens designs into more complex, better designs.
Intelligent pruning rules make the combinatorial search feasible.
We have designed and built several high performance optical systems which demonstrate the practicality of the system.
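The discrete stage is the novel bit, and its flavor is easy to sketch: enumerate combinations of catalog elements, prune combinations that cannot reach the target, and rank the survivors. The toy below uses ideal thin lenses and the two-element combination formula 1/f = 1/f1 + 1/f2 - d/(f1*f2); the catalog, spacings, and pruning threshold are invented, and the real system evaluates full lens prescriptions rather than thin-lens stand-ins.

```python
from itertools import combinations

catalog = [25.0, 30.0, 40.0, 50.0, 75.0, 100.0, -50.0, -75.0]   # focal lengths [mm]
target_f, spacings = 35.0, [2.0, 5.0, 10.0]                     # target [mm], element gaps [mm]

def combined_focal(f1, f2, d):
    denom = 1.0 / f1 + 1.0 / f2 - d / (f1 * f2)
    return None if abs(denom) < 1e-9 else 1.0 / denom

designs = []
for f1, f2 in combinations(catalog, 2):
    # pruning rule: at zero separation the pair has power 1/f1 + 1/f2, so skip
    # pairs whose combined power cannot land anywhere near the target
    if abs(1.0 / f1 + 1.0 / f2 - 1.0 / target_f) > 0.015:
        continue
    for d in spacings:
        f = combined_focal(f1, f2, d)
        if f is not None and f > 0:
            designs.append((abs(f - target_f), f1, f2, d, f))

for err, f1, f2, d, f in sorted(designs)[:3]:
    print(f"f1={f1}mm  f2={f2}mm  gap={d}mm  ->  f={f:.1f}mm  (|error|={err:.2f}mm)")
```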
Read – The Long Utopia
Today I finished reading “The Long Utopia” by Terry Pratchett
Studying – Illustrator one-on-one fundamentals
This month I am studying “Illustrator one-on-one fundamentals”
Illustrator has been completely overhauled since I last took a class in it.
I have been using Illustrator for years, but I am sure there are hidden depths I have yet to explore.
Also, I didn’t have access to the advanced Photoshop class, otherwise I would have been studying that this month.
Read – The Daleth Effect
Today I finished reading “The Daleth Effect” by Harry Harrison
Creative Valleys
First, I shall paraphrase what every article written around this paper is stating: “Creativity peaks during our early 20’s and then again in our 50’s. But let’s focus on the early 20’s.”
Complete waste of time BBC “news” article with irrelevant and unrelated image attached here:
https://www.bbc.com/news/newsbeat-48077012
And the original Ohio State University study here, which you probably don’t want to waste your time reading either:
https://www.nber.org/papers/w11799.pdf
Abstract:
This paper studies life cycle creativity among Nobel laureate economists. We identify two distinct life cycles of scholarly creativity. Experimental innovators work inductively, accumulating knowledge from experience. Conceptual innovators work deductively, applying abstract principles. We find that conceptual innovators do their most important work earlier in their careers than experimental laureates. For instance, our estimates imply that the probability that the most conceptual laureate publishes his single best work peaks at age 25 compared to the mid-50s for the most experimental laureate. Thus while experience benefits experimental innovators, newness to a field benefits conceptual innovators.
Wow!
What an absolute steaming pile of bullshit filtered through the lens of shoddy journalism from a questionable, non-longitudinal study of a limited data set (31 non-participating subjects) that focused on a single data point (citations of a science paper) in a single field (economics) set up by two people who ranked (subjectively) the style of creativity someone demonstrates.
Interestingly, this quote: “…For the most conceptual laureate, the probability of a single best year peaks at age 24.8…” indicates a single data point of a single subject that can skew the story we are telling ourselves (28.8 was the mean age for the first peak), which is just barely “in our 20’s”. Perhaps a rephrasing to “our late 20’s” might be better.
The results are inconclusive and the conclusion is so littered with “weasel words” like “could” and “may” I honestly thought I was reading a paper written by someone with commitment issues.
The paper also seems to be at odds with many of the papers it cites, which state, quite clearly, that creativity peaks between the mid-30s and late-40s, and also that “creativity” is governed not so much by age as by absorption into the cultural mindset of the field and by where the person is in their career and their life.
There is a reason why theoretical mathematicians do their “best work” in their 30’s and multiple studies have found it has nothing to do with how creative they actually are.
There are an awful lot of articles written around this study (none of which link to the original study; they appear to just be parroting each other’s misconceptions), and everyone is throwing away forty years of psychological research into how creativity works and its peaks and valleys, and quoting this paper as though it were the New Gospel and there are only two points in life where we are creative. So we’re right back where we started with ageism and erroneously defining “creative peaks.”
This paper should be treated as what it is: another data point in how creativity works. We humans really need to stop the cycle of touting the latest paper as the final answer on a subject.
Creating and creativity are, to some, like breathing: they cannot stop even if they wanted to.
Read – The Wanderer
Today I finished reading “The Wanderer” by Fritz Leiber
Paper – Merging of Bézier curves with box constraints
Today I read a paper titled “Merging of Bézier curves with box constraints”
The abstract is:
In this paper, we present a novel approach to the problem of merging of Bézier curves with respect to the $L_2$-norm.
We give illustrative examples to show that the solution of the conventional merging problem may not be suitable for further modification and applications.
As in the case of the degree reduction problem, we apply the so-called restricted area approach, proposed recently in (P. Gospodarczyk, Computer-Aided Design 62 (2015), 143–151), to avoid certain defects and make the resulting curve more useful.
The method for solving the new problem is based on a box-constrained quadratic programming approach.
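Box-constrained QPs are easy to experiment with, so here is a generic projected-gradient sketch of the problem class (minimize 0.5 x^T Q x + c^T x subject to lo <= x <= hi). The Q, c, and box below are arbitrary; in the paper, x would hold the control points of the merged Bézier curve, the box would encode the restricted area, and a proper QP solver would be used instead of this simple iteration.

```python
import numpy as np

def box_qp(Q, c, lo, hi, iters=2000):
    """Minimize 0.5 x^T Q x + c^T x over the box lo <= x <= hi (Q convex)."""
    x = np.clip(np.zeros_like(c), lo, hi)
    step = 1.0 / np.linalg.norm(Q, 2)                 # 1 / Lipschitz constant of the gradient
    for _ in range(iters):
        x = np.clip(x - step * (Q @ x + c), lo, hi)   # gradient step, then project onto the box
    return x

Q = np.array([[2.0, 0.5], [0.5, 1.0]])                # symmetric positive definite
c = np.array([-2.0, -1.0])
x = box_qp(Q, c, lo=np.array([0.0, 0.0]), hi=np.array([0.6, 0.6]))
print(x)   # the unconstrained minimizer lies outside the box, so the answer sits on its boundary
```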
Read – Stick to Drawing Comics, Monkey Brain!
Today I finished reading “Stick to Drawing Comics, Monkey Brain!” by Scott Adams
Read – The John Varley Reader
Today I finished reading “The John Varley Reader” by John Varley
Paper – Efficient Hill-Climber for Multi-Objective Pseudo-Boolean Optimization
Today I read a paper titled “Efficient Hill-Climber for Multi-Objective Pseudo-Boolean Optimization”
The abstract is:
Local search algorithms and iterated local search algorithms are basic techniques.
Local search can be used as a stand-alone search method, but it can also be hybridized with evolutionary algorithms.
Recently, it has been shown that it is possible to identify improving moves in Hamming neighborhoods for k-bounded pseudo-Boolean optimization problems in constant time.
This means that local search does not need to enumerate neighborhoods to find improving moves.
It also means that evolutionary algorithms do not need to use random mutation as an operator, except perhaps as a way to escape local optima.
In this paper, we show how improving moves can be identified in constant time for multiobjective problems that are expressed as k-bounded pseudo-Boolean functions.
In particular, multiobjective forms of NK Landscapes and Mk Landscapes are considered.
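The constant-time improving-move bookkeeping is the heart of this, and the single-objective version is already fun to implement: cache the score change of every single-bit flip, and after accepting a flip recompute only the deltas of bits that share a subfunction with the flipped one. The random k-bounded instance below is my own toy; the paper's contribution, extending this to multiobjective Mk Landscapes, is not reproduced here.

```python
import random

random.seed(0)
N, K, M = 40, 3, 40                              # bits, subfunction arity, subfunction count
subfns = []                                      # each entry: (tuple of variables, lookup table)
for _ in range(M):
    vars_ = tuple(random.sample(range(N), K))
    subfns.append((vars_, [random.random() for _ in range(2 ** K)]))
touches = [[m for m, (vs, _) in enumerate(subfns) if i in vs] for i in range(N)]

def sub_value(m, x):
    vs, table = subfns[m]
    return table[sum(x[v] << p for p, v in enumerate(vs))]

def flip_delta(i, x):
    """Change in f(x) from flipping bit i: only subfunctions touching i matter."""
    before = sum(sub_value(m, x) for m in touches[i])
    x[i] ^= 1
    after = sum(sub_value(m, x) for m in touches[i])
    x[i] ^= 1
    return after - before

x = [random.randint(0, 1) for _ in range(N)]
score = sum(sub_value(m, x) for m in range(M))
delta = [flip_delta(i, x) for i in range(N)]     # initialized once in O(N)
improving = {i for i in range(N) if delta[i] > 0}

while improving:                                 # hill climb straight off the move buffer
    j = improving.pop()
    x[j] ^= 1
    score += delta[j]
    affected = {i for m in touches[j] for i in subfns[m][0]}
    for i in affected:                           # a bounded number of bits per accepted move
        delta[i] = flip_delta(i, x)
        if delta[i] > 0:
            improving.add(i)
        else:
            improving.discard(i)
print("local optimum score:", round(score, 3))
```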
Read – Ukridge
Today I finished reading “Ukridge” by P.G. Wodehouse
Paper – On Avoidance Learning with Partial Observability
Today I read a paper titled “On Avoidance Learning with Partial Observability”
The abstract is:
We study a framework where agents have to avoid aversive signals.
The agents are given only partial information, in the form of features that are projections of task states.
Additionally, the agents have to cope with non-determinism, defined as unpredictability in the way that actions are executed.
The goal of each agent is to define its behavior based on feature-action pairs that reliably avoid aversive signals.
We study a learning algorithm, called A-learning, that exhibits fixpoint convergence, where the belief of the allowed feature-action pairs eventually becomes fixed.
A-learning is parameter-free and easy to implement.
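The abstract does not spell out the update rule, so what follows is only my guess at a minimal version of the idea: keep a set of allowed feature-action pairs and delete a pair whenever acting on it is observed to produce an aversive signal. Deletions can only shrink the set, so the belief reaches a fixpoint. The toy environment and everything else here are assumptions, not the paper's A-learning algorithm.

```python
import itertools
import random

random.seed(0)
FEATURES, ACTIONS = range(4), range(3)
allowed = set(itertools.product(FEATURES, ACTIONS))      # start by believing everything is safe

def environment(feature, action):
    """Toy non-deterministic environment: returns (next feature, aversive signal?)."""
    aversive = (feature + action) % 4 == 0 and random.random() < 0.7
    return random.choice(list(FEATURES)), aversive

feature = 0
for _ in range(5000):                                     # experience loop
    options = [a for a in ACTIONS if (feature, a) in allowed]
    if not options:                                       # nothing believed safe here: restart
        feature = random.choice(list(FEATURES))
        continue
    action = random.choice(options)
    feature_next, aversive = environment(feature, action)
    if aversive:
        allowed.discard((feature, action))                # the set only shrinks, so it converges
    feature = feature_next

print(sorted(allowed))
```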
Paper – Light Efficient Flutter Shutter
Today I read a paper titled “Light Efficient Flutter Shutter”
The abstract is:
Flutter shutter is a technique in which the exposure is chopped into segments and light is only integrated part of the time.
By carefully selecting the chopping sequence it is possible to better condition the data for reconstruction problems such as motion deblurring, focal sweeping, and compressed sensing.
The partial exposure trades better conditioning for less energy.
In problems such as motion deblurring, the lack of available energy is what caused the problem in the first place (since strong illumination would allow a short exposure, which eliminates motion blur).
It is still beneficial because the benefit from the better conditioning outweighs the cost in energy.
This document focuses on a light-efficient flutter shutter that provides better conditioning and better energy utilization than the conventional flutter shutter.
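A quick numpy experiment shows why chopping the exposure helps at all: under constant-velocity motion the shutter sequence becomes the 1D blur kernel, and deblurring is only well posed if that kernel's spectrum has no zeros. The binary code below is an arbitrary example I made up, not an optimized flutter-shutter code and certainly not the light-efficient sequence this paper proposes.

```python
import numpy as np

n = 32
box = np.ones(n)                                  # conventional full exposure: all slots open
code = np.array([1,1,0,1,0,0,1,1, 1,0,1,0,0,0,1,1,
                 0,1,1,0,1,0,0,1, 0,1,1,1,0,0,1,0], dtype=float)   # 17 of 32 slots open

for name, kernel in (("box shutter   ", box), ("fluttered code", code)):
    spectrum = np.abs(np.fft.rfft(kernel, n=256))
    print(f"{name}: {int(kernel.sum())}/{n} slots open, min |DFT| = {spectrum.min():.3e}")
```

Running it shows the trade the abstract describes: the chopped code collects light in only 17 of the 32 slots, but it avoids the exact spectral zeros that make the plain box exposure ill-posed to invert.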
Read – The Breakthrough Principle of 16x
Today I finished reading “The Breakthrough Principle of 16x” by Richard Koch
Read – Teaching What Really Happened
Today I finished reading “Teaching What Really Happened: How to Avoid the Tyranny of Textbooks and Get Students Excited About Doing History” by James W. Loewen
Studying – Photoshop one-on-one intermediate
This month I am studying “Photoshop one-on-one intermediate”
Got through the fundamentals class faster than I expected so spent the remainder of the month just doing more self-directed exercises.
This month, taking it to the next level with the “intermediate” class.
Read – King of the Comics
Today I finished reading “King of the Comics: A Pearls Before Swine Collection” by Stephan Pastis
Paper – A particle filter to reconstruct a free-surface flow from a depth camera
Today I read a paper titled “A particle filter to reconstruct a free-surface flow from a depth camera”
The abstract is:
We investigate the combined use of a Kinect depth sensor and of a stochastic data assimilation method to recover free-surface flows.
More specifically, we use a Weighted ensemble Kalman filter method to reconstruct the complete state of free-surface flows from a sequence of depth images only.
This particle filter accounts for model and observations errors.
This data assimilation scheme is enhanced with the use of two observations instead of the single observation used classically.
We evaluate the developed approach on two numerical test cases: the collapse of a water column as a toy example, and a flow in a suddenly expanding flume as a more realistic flow.
The robustness of the method to depth data errors and also to initial and inflow conditions is considered.
We illustrate the interest of using two observations instead of one observation into the correction step, especially for unknown inflow boundary conditions.
Then, the performance of the Kinect sensor to capture temporal sequences of depth observations is investigated.
Finally, the efficiency of the algorithm is qualified for a wave in a real rectangular flat bottom tank.
It is shown that for basic initial conditions, the particle filter rapidly and remarkably reconstructs velocity and height of the free surface flow based on noisy measurements of the elevation alone.
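For my own notes, here is what a generic (stochastic) ensemble Kalman filter analysis step looks like, since that is the building block behind the weighted ensemble Kalman filter named above. The paper's variant adds importance weights and assimilates depth images into a free-surface flow model; the random toy state and trivial observation operator below are just placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_state, n_obs, n_ens = 50, 10, 30

def enkf_analysis(X, y, H, R):
    """X: (n_state, n_ens) forecast ensemble, y: (n_obs,) observation,
    H: (n_obs, n_state) observation operator, R: (n_obs, n_obs) observation error covariance."""
    A = X - X.mean(axis=1, keepdims=True)             # ensemble anomalies
    HA = H @ A
    P_hh = HA @ HA.T / (n_ens - 1) + R                # innovation covariance
    P_xh = A @ HA.T / (n_ens - 1)                     # state/observation cross-covariance
    K = P_xh @ np.linalg.inv(P_hh)                    # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T   # perturbed obs
    return X + K @ (Y - H @ X)                        # analysis ensemble

truth = rng.normal(size=n_state)
H = np.eye(n_obs, n_state)                            # observe the first ten state entries
R = 0.01 * np.eye(n_obs)
X = truth[:, None] + rng.normal(size=(n_state, n_ens))          # forecast ensemble
y = H @ truth + rng.multivariate_normal(np.zeros(n_obs), R)     # noisy observation
Xa = enkf_analysis(X, y, H, R)
print("distance to truth on the observed part:",
      np.linalg.norm(H @ X.mean(1) - H @ truth), "->",
      np.linalg.norm(H @ Xa.mean(1) - H @ truth))
```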
Paper – Bayesian Opponent Exploitation in Imperfect-Information Games
Today I read a paper titled “Bayesian Opponent Exploitation in Imperfect-Information Games”
The abstract is:
The two most fundamental problems in computational game theory are computing a Nash equilibrium and learning to exploit opponents given observations of their play (aka opponent exploitation).
The latter is perhaps even more important than the former: Nash equilibrium does not have a compelling theoretical justification in game classes other than two-player zero-sum, and furthermore for all games one can potentially do better by exploiting perceived weaknesses of the opponent than by following a static equilibrium strategy throughout the match.
The natural setting for opponent exploitation is the Bayesian setting where we have a prior model that is integrated with observations to create a posterior opponent model that we respond to.
The most natural, and well-studied, prior distribution is the Dirichlet distribution.
An exact polynomial-time algorithm is known for best-responding to the posterior distribution for an opponent assuming a Dirichlet prior with multinomial sampling in the case of normal-form games; however, for the case of imperfect-information games the best known algorithm is a sampling algorithm based on approximating an infinite integral without theoretical guarantees.
The main result is the first exact algorithm for accomplishing this in imperfect-information games.
We also present an algorithm for another natural setting where the prior is the uniform distribution over a polyhedron.
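The normal-form case mentioned above is simple enough to write down: with a Dirichlet prior over the opponent's mixed strategy and multinomially sampled observations, the posterior is again Dirichlet, and by linearity the exact best response just maximizes expected payoff against the posterior mean. The rock-paper-scissors payoffs, prior, and counts below are illustrative; the paper's contribution is the much harder imperfect-information analogue.

```python
import numpy as np

payoff = np.array([[ 0, -1,  1],     # our payoff matrix: rows are our action (R, P, S),
                   [ 1,  0, -1],     # columns are the opponent's action (R, P, S)
                   [-1,  1,  0]], dtype=float)

prior = np.array([1.0, 1.0, 1.0])    # symmetric Dirichlet prior (pseudo-counts)
observed = np.array([12, 3, 5])      # opponent played Rock 12 times, Paper 3, Scissors 5

posterior = prior + observed                       # Dirichlet posterior parameters
opp_mean = posterior / posterior.sum()             # posterior mean of the opponent's strategy
expected = payoff @ opp_mean                       # expected payoff of each of our actions
best = int(np.argmax(expected))

print("posterior mean opponent strategy:", np.round(opp_mean, 3))
print("best response:", ["Rock", "Paper", "Scissors"][best],
      "with expected payoff", round(float(expected[best]), 3))
```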
Paper – A Real-Time Soft Robotic Patient Positioning System for Maskless Head-and-Neck Cancer Radiotherapy: An Initial Investigation
Today I read a paper titled “A Real-Time Soft Robotic Patient Positioning System for Maskless Head-and-Neck Cancer Radiotherapy: An Initial Investigation”
The abstract is:
We present an initial examination of a novel approach to accurately position a patient during head and neck intensity modulated radiotherapy (IMRT).
Position-based visual-servoing of a radio-transparent soft robot is used to control the flexion/extension cranial motion of a manikin head.
A Kinect RGB-D camera is used to measure head position and the error between the sensed and desired position is used to control a pneumatic system which regulates pressure within an inflatable air bladder (IAB).
Results show that the system is capable of controlling head motion to within 2mm with respect to a reference trajectory.
This establishes proof-of-concept that using multiple IABs and actuators can improve cancer treatment.
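Stripped of the pneumatics, the control idea reduces to a feedback loop: measure the head angle from the depth camera, compare it with the reference trajectory, and command bladder pressure from the error. The first-order plant, the PI gains, and every name in this sketch are hypothetical; the paper uses position-based visual servoing of a real inflatable air bladder, not this toy model.

```python
import numpy as np

DT, T = 0.02, 10.0                   # control period [s] and experiment length [s]
KP, KI = 4.0, 1.5                    # proportional-integral gains (illustrative values)

def plant(angle, pressure):
    """Toy dynamics: the flexion/extension angle relaxes toward a pressure-dependent value."""
    return angle + DT * (0.8 * pressure - 0.5 * angle)

def reference(t):
    return 5.0 + 3.0 * np.sin(0.5 * t)          # desired head angle [deg]

angle, pressure, integral = 0.0, 0.0, 0.0
errors = []
for k in range(int(T / DT)):
    t = k * DT
    measured = angle + np.random.normal(0.0, 0.05)            # noisy depth-camera reading
    err = reference(t) - measured
    integral += err * DT
    pressure = np.clip(KP * err + KI * integral, 0.0, 20.0)   # a bladder can only push
    angle = plant(angle, pressure)
    errors.append(abs(err))
print("mean |tracking error| over the last second: %.3f deg" % np.mean(errors[-int(1 / DT):]))
```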
Paper – Deep Tracking: Seeing Beyond Seeing Using Recurrent Neural Networks
Today I read a paper titled “Deep Tracking: Seeing Beyond Seeing Using Recurrent Neural Networks”
The abstract is:
This paper presents to the best of our knowledge the first end-to-end object tracking approach which directly maps from raw sensor input to object tracks in sensor space without requiring any feature engineering or system identification in the form of plant or sensor models.
Specifically, our system accepts a stream of raw sensor data at one end and, in real-time, produces an estimate of the entire environment state at the output including even occluded objects.
We achieve this by framing the problem as a deep learning task and exploit sequence models in the form of recurrent neural networks to learn a mapping from sensor measurements to object tracks.
In particular, we propose a learning method based on a form of input dropout which allows learning in an unsupervised manner, only based on raw, occluded sensor data without access to ground-truth annotations.
We demonstrate our approach using a synthetic dataset designed to mimic the task of tracking objects in 2D laser data — as commonly encountered in robotics applications — and show that it learns to track many dynamic objects despite occlusions and the presence of sensor noise.
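The training trick is concise enough to sketch in PyTorch: blank out random input frames ("input dropout") and train a recurrent network to reconstruct the full occupancy sequence anyway, so the raw sensor stream is its own supervision. The tiny grid, the GRU-over-flattened-grids architecture, and the moving-dot data generator are my simplifications, not the authors' network or dataset.

```python
import torch
import torch.nn as nn

G, T, B = 16, 20, 8                      # grid size, sequence length, batch size

def moving_dot_batch():
    """Synthetic 'sensor' sequences: one object drifting across a G x G occupancy grid."""
    seq = torch.zeros(B, T, G * G)
    for b in range(B):
        x, y = torch.randint(0, G, (2,))
        dx, dy = torch.randint(-1, 2, (2,))
        for t in range(T):
            seq[b, t, (y % G) * G + (x % G)] = 1.0
            x, y = x + dx, y + dy
    return seq

class DeepTracker(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(G * G, hidden, batch_first=True)
        self.head = nn.Linear(hidden, G * G)
    def forward(self, x):
        h, _ = self.rnn(x)
        return self.head(h)              # occupancy logits for every cell at every time step

model = DeepTracker()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):
    target = moving_dot_batch()
    keep = (torch.rand(B, T, 1) > 0.5).float()     # input dropout: hide roughly half the frames
    loss = loss_fn(model(target * keep), target)   # but always reconstruct the full sequence
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 50 == 0:
        print(step, float(loss))
```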
Read – The Market Square Dog
Today I finished reading “The Market Square Dog” by James Herriot
Read – Human Error Processor 1.5
Today I finished reading “Human Error Processor 1.5” by Shirow Masamune
Read – Contagious: Why Things Catch On
Today I finished reading “Contagious: Why Things Catch On” by Jonah Berger
Read – God’s Debris
Today I finished reading “God’s Debris: A Thought Experiment” by Scott Adams
Read – Bill the Conqueror
Today I finished reading “Bill the Conqueror” by P.G. Wodehouse
Paper – 11 x 11 Domineering is Solved: The first player wins
Today I read a paper titled “11 x 11 Domineering is Solved: The first player wins”
The abstract is:
We have developed a program called MUDoS (Maastricht University Domineering Solver) that solves Domineering positions in a very efficient way.
This enables the solution of positions known so far (up to the 10 x 10 board) much more quickly (measured in the number of investigated nodes).
More importantly, it enables the solution of the 11 x 11 Domineering board, a board up till now far out of reach of previous Domineering solvers.
The solution needed the investigation of 259,689,994,008 nodes, using almost half a year of computation time on a single simple desktop computer.
The results show that under optimal play the first player wins the 11 x 11 Domineering game, irrespective of whether Vertical or Horizontal starts the game.
In addition, several other boards hitherto unsolved were solved.
Using the convention that Vertical starts, the 8 x 15, 11 x 9, 12 x 8, 12 x 15, 14 x 8, and 17 x 6 boards are all won by Vertical, whereas the 6 x 17, 8 x 12, 9 x 11, and 11 x 10 boards are all won by Horizontal.
Read – Retrograde Summer
Today I finished reading “Retrograde Summer” by John Varley
Studying – Photoshop one-on-one fundamentals
This month I am studying “Photoshop one-on-one fundamentals”
I figured it was about time I actually sat down and updated my Adobe Photoshop skills for the new version.
Expecting this will take me most of the month if I do all of the exercises and any extra exercises too.
Read – Everyone Knows What a Dragon Looks Like
Today I finished reading “Everyone Knows What a Dragon Looks Like” by Jay Williams
Read – Empowered, Volume 7
Today I finished reading “Empowered, Volume 7” by Adam Warren
Read – Intron Depot 7 : Barb Wire 02
Today I finished reading “Intron Depot 7 : Barb Wire 02” by Masamune Shirow
Paper – Autonomous Vehicle Routing in Congested Road Networks
Today I read a paper titled “Autonomous Vehicle Routing in Congested Road Networks”
The abstract is:
This paper considers the problem of routing and rebalancing a shared fleet of autonomous (i.e., self-driving) vehicles providing on-demand mobility within a capacitated transportation network, where congestion might disrupt throughput.
We model the problem within a network flow framework and show that under relatively mild assumptions the rebalancing vehicles, if properly coordinated, do not lead to an increase in congestion (in stark contrast to common belief).
From an algorithmic standpoint, such theoretical insight suggests that the problem of routing customers and rebalancing vehicles can be decoupled, which leads to a computationally-efficient routing and rebalancing algorithm for the autonomous vehicles.
Numerical experiments and case studies corroborate our theoretical insights and show that the proposed algorithm outperforms state-of-the-art point-to-point methods by avoiding excess congestion on the road.
Collectively, this paper provides a rigorous approach to the problem of congestion-aware, system-wide coordination of autonomously driving vehicles, and to the characterization of the sustainability of such robotic systems.
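The decoupling result suggests a tidy implementation: once customer routing is fixed, moving empty vehicles from surplus to deficit stations is a min-cost flow problem on the road graph. The four-node network, travel-time weights, and capacities in this networkx sketch are made up, and the paper's model treats congestion far more carefully than a fixed per-edge cost does.

```python
import networkx as nx

G = nx.DiGraph()
# negative demand = surplus vehicles to send out, positive demand = vehicles needed
G.add_node("A", demand=-3)
G.add_node("B", demand=-1)
G.add_node("C", demand=2)
G.add_node("D", demand=2)
roads = [("A", "B", 4, 5), ("A", "C", 2, 3), ("B", "C", 3, 5),
         ("B", "D", 1, 5), ("C", "D", 2, 2), ("A", "D", 7, 5)]
for u, v, travel_time, cap in roads:
    G.add_edge(u, v, weight=travel_time, capacity=cap)

flow = nx.min_cost_flow(G)        # rebalancing routes that satisfy every deficit at minimum cost
for u in flow:
    for v, f in flow[u].items():
        if f:
            print(f"send {f} empty vehicle(s) along {u} -> {v}")
print("total rebalancing cost:", nx.cost_of_flow(G, flow))
```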