This month I am studying “Fundamentals of manga digital illustration”
by justin
Today I read a paper titled “Region Based Approximation for High Dimensional Bayesian Network Models”
The abstract is:
Performing efficient inference on Bayesian Networks (BNs) with large numbers of densely connected variables is challenging.
With exact inference methods, such as the Junction Tree algorithm, clustering complexity can grow exponentially with the number of nodes and so computation becomes intractable.
This paper presents a general purpose approximate inference algorithm called Triplet Region Construction (TRC) that reduces the clustering complexity for factorized models from worst case exponential to polynomial.
We employ graph factorization to reduce connection complexity and produce clusters of limited size.
Unlike MCMC algorithms, TRC is guaranteed to converge, and we present experiments showing that TRC achieves accurate results when compared with exact solutions.
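Out of curiosity, here is a back-of-the-envelope Python sketch of my own (not the authors' code) of why limiting clusters to triplets matters: a single cluster over n binary variables needs a potential table with 2^n entries, while a chain of overlapping triplet clusters covering the same variables needs only about 8n.

```python
# My own illustration of clustering complexity, not code from the paper.

def cluster_table_size(num_binary_vars: int) -> int:
    """Entries in one potential table over num_binary_vars binary variables."""
    return 2 ** num_binary_vars

def triplet_chain_size(num_binary_vars: int) -> int:
    """Total entries if the same variables are covered by overlapping
    triplet clusters, e.g. (x1,x2,x3), (x2,x3,x4), ..."""
    num_triplets = max(1, num_binary_vars - 2)
    return num_triplets * cluster_table_size(3)

for n in (5, 10, 20, 30):
    print(f"n={n:2d}  single cluster: {cluster_table_size(n):>13,}"
          f"  triplet chain: {triplet_chain_size(n):>5,}")
```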
by justin
Have been doing a lot of optimization work for a couple of clients lately and a large majority of my billable hours break down into “staring at a progress bar”, “figuring out why a progress bar isn’t moving fast enough” and “figuring out why the progress bar no longer moves.”
by justin
This month I am studying “Photoshop one-on-one advanced”
by justin
Today I read a paper titled “A Practical Approach to Spatiotemporal Data Compression”
The abstract is:
Datasets representing the world around us are becoming ever more unwieldy as data volumes grow.
This is largely due to increased measurement and modelling resolution, but the problem is often exacerbated when data are stored at spuriously high precision.
In an effort to facilitate analysis of these datasets, computationally intensive calculations are increasingly being performed on specialised remote servers before the reduced data are transferred to the consumer.
Due to bandwidth limitations, this often means data are displayed as simple 2D data visualisations, such as scatter plots or images.
We present here a novel way to efficiently encode and transmit 4D data fields on-demand so that they can be locally visualised and interrogated.
This nascent “4D video” format allows us to more flexibly move the boundary between data server and consumer client.
However, it has applications beyond purely scientific visualisation, in the transmission of data to virtual and augmented reality.
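The point about spuriously high precision is easy to demonstrate. A quick sketch of my own (not the paper's codec): zeroing low-order float32 mantissa bits keeps the digits that matter while making the byte stream far more compressible.

```python
import zlib
import numpy as np

def round_mantissa(a: np.ndarray, keep_bits: int) -> np.ndarray:
    """Keep only the top keep_bits of the 23-bit float32 mantissa."""
    bits = a.astype(np.float32).view(np.uint32)
    mask = np.uint32(0xFFFFFFFF) << np.uint32(23 - keep_bits)
    return (bits & mask).view(np.float32)

# A smooth field contaminated with tiny measurement noise.
x = np.linspace(0, 8, 100_000).astype(np.float32)
field = np.sin(x) + 1e-6 * np.random.rand(100_000).astype(np.float32)

raw = zlib.compress(field.tobytes())
trimmed = zlib.compress(round_mantissa(field, keep_bits=10).tobytes())
print(len(raw), len(trimmed))  # the trimmed stream is much smaller
```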
by justin
Today I read a paper titled “Research Priorities for Robust and Beneficial Artificial Intelligence”
The abstract is:
Success in the quest for artificial intelligence has the potential to bring unprecedented benefits to humanity, and it is therefore worthwhile to investigate how to maximize these benefits while avoiding potential pitfalls.
This article gives numerous examples (which should by no means be construed as an exhaustive list) of such worthwhile research aimed at ensuring that AI remains robust and beneficial.
by justin
This month I am studying “Illustrator CC one-on-one advanced”
Ah, finally found the advanced Illustrator course I was looking for. Time to wrap this up completely.
by justin
Today I read a paper titled “A Collaborative Untethered Virtual Reality Environment for Interactive Social Network Visualization”
The abstract is:
The increasing prevalence of Virtual Reality technologies as a platform for gaming and video playback warrants research into how to best apply the current state of the art to challenges in data visualization.
Many current VR systems are noncollaborative, while data analysis and visualization is often a multi-person process.
Our goal in this paper is to address the technical and user experience challenges that arise when creating VR environments for collaborative data visualization.
We focus on the integration of multiple tracking systems and the new interaction paradigms that this integration can enable, along with visual design considerations that apply specifically to collaborative network visualization in virtual reality.
We demonstrate a system for collaborative interaction with large 3D layouts of Twitter friend/follow networks.
The system is built by combining a ‘Holojam’ architecture (multiple GearVR Headsets within an OptiTrack motion capture stage) and Perception Neuron motion suits, to offer an untethered, full-room multi-person visualization experience.
by justin
Today I finished reading “Hylozoic” by Rudy Rucker
by justin
Today I finished reading “Usagi Yojimbo #30: Thieves and Spies” by Stan Sakai
by justin
Today I finished reading “Fundamentals of Puzzle and Casual Game Design” by Ernest Adams
by justin
Today I finished reading “The Last Dark” by Stephen R. Donaldson
by justin
Today I finished reading “Pieces 7: Hellhound 01 & 02” by Masamune Shirow
by justin
Today I read a paper titled “Empath: Understanding Topic Signals in Large-Scale Text”
The abstract is:
Human language is colored by a broad range of topics, but existing text analysis tools only focus on a small number of them.
We present Empath, a tool that can generate and validate new lexical categories on demand from a small set of seed terms (like “bleed” and “punch” to generate the category violence).
Empath draws connotations between words and phrases by deep learning a neural embedding across more than 1.8 billion words of modern fiction.
Given a small set of seed words that characterize a category, Empath uses its neural embedding to discover new related terms, then validates the category with a crowd-powered filter.
Empath also analyzes text across 200 built-in, pre-validated categories we have generated from common topics in our web dataset, like neglect, government, and social media.
We show that Empath’s data-driven, human validated categories are highly correlated (r=0.906) with similar categories in LIWC.
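The seed-expansion step is simple enough to sketch. A toy version of my own, with made-up 4-d vectors standing in for Empath's fiction-trained embedding: rank the rest of the vocabulary by cosine similarity to the mean of the seed vectors.

```python
import numpy as np

# Hypothetical toy vectors for illustration only; Empath learns its
# embedding from 1.8 billion words of fiction.
toy_embedding = {
    "bleed":  np.array([0.90, 0.10, 0.00, 0.20]),
    "punch":  np.array([0.80, 0.20, 0.10, 0.10]),
    "wound":  np.array([0.85, 0.15, 0.05, 0.20]),
    "kitten": np.array([0.00, 0.90, 0.80, 0.10]),
    "budget": np.array([0.10, 0.10, 0.10, 0.90]),
}

def expand_category(seeds, embedding, top_k=2):
    """Rank non-seed words by cosine similarity to the seed centroid."""
    centroid = np.mean([embedding[w] for w in seeds], axis=0)
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    candidates = [(w, cos(v, centroid))
                  for w, v in embedding.items() if w not in seeds]
    return sorted(candidates, key=lambda p: -p[1])[:top_k]

print(expand_category({"bleed", "punch"}, toy_embedding))
# "wound" ranks first, as a violence-like term
```

In the real tool this ranking is followed by the crowd-powered filter the abstract mentions.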
by justin
This month I am studying “Illustrator one-on-one mastery”
Finding the one-on-one classes a bit easy, probably because I already know Illustrator fairly well.
by justin
Today I read a paper titled “Do You See What I Mean? Visual Resolution of Linguistic Ambiguities”
The abstract is:
Understanding language goes hand in hand with the ability to integrate complex contextual information obtained via perception.
In this work, we present a novel task for grounded language understanding: disambiguating a sentence given a visual scene which depicts one of the possible interpretations of that sentence.
To this end, we introduce a new multimodal corpus containing ambiguous sentences, representing a wide range of syntactic, semantic and discourse ambiguities, coupled with videos that visualize the different interpretations for each sentence.
We address this task by extending a vision model which determines if a sentence is depicted by a video.
We demonstrate how such a model can be adjusted to recognize different interpretations of the same underlying sentence, allowing it to disambiguate sentences in a unified fashion across the different ambiguity types.
by justin
Today I finished reading “Pieces 8: Wild Wet West” by Masamune Shirow
by justin
Today I read a paper titled “The GPU-based Parallel Ant Colony System”
The abstract is:
The Ant Colony System (ACS) is, next to Ant Colony Optimization (ACO) and the MAX-MIN Ant System (MMAS), one of the most efficient metaheuristic algorithms inspired by the behavior of ants.
In this article we present three novel parallel versions of the ACS for the graphics processing units (GPUs).
To the best of our knowledge, this is the first such work on the ACS. The ACS shares many key elements with the ACO and the MMAS, but differences in the process of building solutions and updating the pheromone trails make obtaining an efficient parallel version for the GPUs a difficult task.
The proposed parallel versions of the ACS differ mainly in their implementations of the pheromone memory.
The first two use the standard pheromone matrix, and the third uses a novel selective pheromone memory.
Computational experiments conducted on several Travelling Salesman Problem (TSP) instances of sizes ranging from 198 to 2392 cities showed that the parallel ACS on Nvidia Kepler GK104 GPU (1536 CUDA cores) is able to obtain a speedup up to 24.29x vs the sequential ACS running on a single core of Intel Xeon E5-2670 CPU.
The parallel ACS with the selective pheromone memory achieved speedups up to 16.85x, but in most cases the obtained solutions were of significantly better quality than for the sequential ACS.
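For reference, the two ACS ingredients that make parallelization awkward are the pseudo-random-proportional city choice and the local pheromone update. A sequential Python sketch of my own (parameter values are the usual textbook ones, not necessarily the paper's):

```python
import random

def next_city(current, unvisited, tau, dist, beta=2.0, q0=0.9):
    """ACS pseudo-random-proportional rule."""
    scores = {j: tau[current][j] * (1.0 / dist[current][j]) ** beta
              for j in unvisited}
    if random.random() < q0:                    # exploit the best-looking edge
        return max(scores, key=scores.get)
    r = random.random() * sum(scores.values())  # otherwise roulette wheel
    acc = 0.0
    for j, s in scores.items():
        acc += s
        if acc >= r:
            return j
    return j

def local_pheromone_update(tau, i, j, rho=0.1, tau0=0.01):
    """Applied to an edge as soon as an ant crosses it."""
    tau[i][j] = (1.0 - rho) * tau[i][j] + rho * tau0
```

Here tau and dist are dict-of-dicts keyed by city; the three GPU variants in the paper differ mainly in how tau is stored and updated concurrently.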
by justin
Today I finished reading “A Gentleman of Leisure” by P.G. Wodehouse
by justin
Today I finished reading “Data Science from Scratch: First Principles with Python” by Joel Grus
by justin
Today I finished reading “Service With a Smile” by P.G. Wodehouse
by justin
Today I finished reading “The Luck of the Bodkins” by P.G. Wodehouse
by justin
Today I finished reading “Piccadilly Jim” by P.G. Wodehouse
by justin
Today I finished reading “Ghost in the Shell 1.5: Human-error Processor” by Masamune Shirow
by justin
Today I finished reading “The Guild: Knights of Good” by Felicia Day
by justin
Today I read a paper titled “Robust Supervisors for Intersection Collision Avoidance in the Presence of Uncontrolled Vehicles”
The abstract is:
We present the design and validation of a centralized controller, called a supervisor, for collision avoidance of multiple human-driven vehicles at a road intersection, considering measurement errors, unmodeled dynamics, and uncontrolled vehicles.
We design the supervisor to be least restrictive, that is, to minimize its interferences with human drivers.
This performance metric is given a precise mathematical form by splitting the design process into two subproblems: a verification problem and a supervisor-design problem.
The verification problem determines whether an input signal exists that makes controlled vehicles avoid collisions at all future times.
The supervisor is designed such that if the verification problem returns yes, it allows the drivers’ desired inputs; otherwise, it overrides controlled vehicles to prevent collisions.
As a result, we propose exact and efficient supervisors.
The exact supervisor solves the verification problem exactly but with combinatorial complexity.
In contrast, the efficient supervisor solves the verification problem within a quantified approximation bound, in time polynomial in the number of controlled vehicles.
We validate the performances of both supervisors through simulation and experimental testing.
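The least-restrictive idea boils down to a very small control loop. My own abstraction of it in Python (not the authors' controller):

```python
def supervisor_step(state, desired_inputs, verify, safe_override):
    """Least-restrictive supervision: interfere only when needed.

    verify(state, inputs) stands for the paper's verification problem:
    does applying `inputs` leave at least one collision-free continuation?
    """
    if verify(state, desired_inputs):
        return desired_inputs          # let the human drivers do their thing
    return safe_override(state)        # override to prevent a collision

# Toy instantiation (entirely my own, not the paper's model): two vehicles
# on crossing paths, positions given as signed distance to the intersection.
def verify_toy(state, inputs):
    positions = [p + u for p, u in zip(state, inputs)]
    inside = [abs(p) < 1.0 for p in positions]
    return sum(inside) <= 1            # at most one vehicle in the box

print(supervisor_step([-3.0, -2.0], [1.0, 1.0], verify_toy,
                      lambda s: [1.0, 0.0]))   # -> [1.0, 1.0], no override
print(supervisor_step([-1.5, -1.5], [1.0, 1.0], verify_toy,
                      lambda s: [1.0, 0.0]))   # -> [1.0, 0.0], one car held
```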
by justin
Today I finished reading “Eragon” by Christopher Paolini
by justin
Today I read a paper titled “PolyDepth: Real-time Penetration Depth Computation using Iterative Contact-Space Projection”
The abstract is:
We present a real-time algorithm that finds the Penetration Depth (PD) between general polygonal models based on iterative and local optimization techniques.
Given an in-collision configuration of an object in configuration space, we find an initial collision-free configuration using several methods such as centroid difference, maximally clear configuration, motion coherence, random configuration, and sampling-based search.
We project this configuration onto a local contact space using a variant of a continuous collision detection algorithm and construct a linear convex cone around the projected configuration.
We then formulate a new projection of the in-collision configuration onto the convex cone as a Linear Complementarity Problem (LCP), which we solve using a type of Gauss-Seidel iterative algorithm.
We repeat this procedure until a locally optimal PD is obtained.
Our algorithm can process complicated models consisting of tens of thousands of triangles at interactive rates.
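The LCP-with-Gauss-Seidel machinery is standard enough to sketch. Here is a generic projected Gauss-Seidel LCP solver in Python (the textbook algorithm, not the authors' contact-space formulation): find z >= 0 with w = Mz + q >= 0 and z·w = 0.

```python
import numpy as np

def lcp_projected_gauss_seidel(M, q, iters=200):
    """Projected Gauss-Seidel for the LCP: z >= 0, Mz + q >= 0, z.(Mz+q) = 0."""
    n = len(q)
    z = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            # residual of row i, excluding the diagonal term
            r = q[i] + M[i] @ z - M[i, i] * z[i]
            z[i] = max(0.0, -r / M[i, i])   # project onto z_i >= 0
    return z

M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
z = lcp_projected_gauss_seidel(M, q)
print(z, M @ z + q)   # complementarity: each pair (z_i, w_i) has a zero
```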
by justin
Today I read a paper titled “Preprint Virtual Reality Assistant Technology for Learning Primary Geography”
The abstract is:
This is the preprint version of our paper at ICWL2015.
A virtual-reality-based enhanced technology for learning primary geography is proposed, which synthesizes several recent information technologies including virtual reality (VR), 3D geographical information systems (GIS), 3D visualization and multimodal human-computer interaction (HCI).
The main functions of the proposed system are introduced, i.e. buffer analysis, overlay analysis, space convex hull calculation, space convex decomposition, 3D topology analysis and 3D space intersection detection.
The multimodal technologies are employed in the system to enhance the immersive perception of the users.
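Buffer and overlay analysis are classic GIS operations; in 2D they are a few lines with shapely (my own example, unrelated to the paper's 3D engine):

```python
from shapely.geometry import Point

# My own 2D illustration with shapely; the paper does these in 3D.
school = Point(0.0, 0.0).buffer(1.0)       # buffer analysis: unit disc
river = Point(1.5, 0.0).buffer(1.0)        # another buffered feature
overlap = school.intersection(river)       # overlay analysis
print(round(overlap.area, 3), school.intersects(river))
```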
by justin
This month I am studying “Illustrator one-on-one intermediate”
Upgrading my Illustrator skills for 2016 by learning the ins and outs of the new Illustrator CC.
by justin
Today I finished reading “Superconnect: Harnessing the Power of Networks and the Strength of Weak Links” by Richard Koch
by justin
Today I read a paper titled “Real-time correction of panoramic images using hyperbolic Möbius transformations”
The abstract is:
Wide-angle images have gained huge popularity in recent years due to the development of computational photography and advances in imaging technology.
They present the information of a scene in a way which is more natural for the human eye but, on the other hand, they introduce artifacts such as bent lines.
These artifacts become more and more unnatural as the field of view increases.
In this work, we present a technique aimed to improve the perceptual quality of panorama visualization.
The main ingredients of our approach are, on the one hand, treating the viewing sphere as a Riemann sphere, which makes it natural to apply Möbius (complex) transformations to the input image, and, on the other hand, a projection scheme that changes as a function of the field of view used.
We also introduce an implementation of our method, compare it against images produced with other methods and show that the transformations can be done in real-time, which makes our technique very appealing for new settings, as well as for existing interactive panorama applications.
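The central mathematical ingredient, a Möbius transformation w = (az + b)/(cz + d) applied to image coordinates treated as complex numbers, is easy to sketch. My illustration below; the paper applies it on the viewing sphere with a field-of-view-dependent projection.

```python
import numpy as np

def mobius(z: np.ndarray, a, b, c, d) -> np.ndarray:
    """Apply w = (a*z + b) / (c*z + d) elementwise to complex coordinates."""
    assert abs(a * d - b * c) > 1e-12, "parameters must be non-degenerate"
    return (a * z + b) / (c * z + d)

h, w = 4, 6
ys, xs = np.mgrid[0:h, 0:w]
z = (xs - w / 2) + 1j * (ys - h / 2)             # center image on the origin
warped = mobius(z, a=1.0, b=0.0, c=0.05, d=1.0)  # mild illustrative warp
print(warped.round(2))
```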
by justin
Today I finished reading “Realware” by Rudy Rucker
by justin
Today I read a paper titled “Self-propelled Chimeras”
The abstract is:
We report the appearance of chimera states in a minimal extension of the classical Vicsek model for collective motion of self-propelled particle systems.
Inspired by earlier works on chimera states in the Kuramoto model, we introduce a phase lag parameter in the particle alignment dynamics.
Compared to oscillatory networks with fixed site positions, the self-propelled particle systems can give rise to distinct forms of chimeras resembling moving flocks through an incoherent surrounding, whose parameter domains we characterize.
More specifically, we detect localized directional one-headed and multi-headed chimera states, as well as scattered directional chimeras without space localization.
We discuss canonical generalizations of the elementary Vicsek model and show chimera states for them indicating the universality of this novel behavior.
A continuum limit of the particle system is derived that preserves the chimeric behavior.
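A minimal version of the phase-lagged Vicsek update is short enough to write down. This is my own sketch of the model family (parameter values are arbitrary, not the paper's):

```python
import numpy as np

# Vicsek-style alignment with a Kuramoto-like phase lag alpha.
rng = np.random.default_rng(0)
N, L, R, v, alpha, eta = 200, 10.0, 1.0, 0.05, 1.4, 0.1
pos = rng.uniform(0, L, size=(N, 2))
theta = rng.uniform(-np.pi, np.pi, size=N)

for _ in range(100):
    # mean heading of neighbours within radius R (periodic box)
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    neigh = (d ** 2).sum(-1) < R ** 2
    mean_sin = (neigh * np.sin(theta)[None, :]).sum(1)
    mean_cos = (neigh * np.cos(theta)[None, :]).sum(1)
    # phase-lagged alignment plus angular noise
    theta = (np.arctan2(mean_sin, mean_cos) - alpha
             + eta * rng.uniform(-np.pi, np.pi, N))
    pos = (pos + v * np.column_stack([np.cos(theta), np.sin(theta)])) % L

print("global order parameter:", abs(np.exp(1j * theta).mean()).round(3))
```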
by justin
Today I finished reading “Complete Stories” by Rudy Rucker
by justin
Today I read a paper titled “Real-time 3D scene description using Spheres, Cones and Cylinders”
The abstract is:
The paper describes a novel real-time algorithm for finding 3D geometric primitives (cylinders, cones and spheres) from 3D range data.
At its core, it performs a fast model fitting with a model update in constant time (O(1)) for each new data point added to the model.
We use a three-stage approach. The first stage inspects 1.5D subspaces to find ellipses.
The next stage uses these ellipses as input by examining their neighborhood structure to form sets of candidates for the 3D geometric primitives.
Finally, candidate ellipses are fitted to the geometric primitives.
The complexity for point processing is O(n); additional lower-order time is needed to work on a significantly smaller number of mid-level objects.
This allows the approach to process 30 frames per second on Kinect depth data, which suggests this approach as a pre-processing step for 3D real-time higher level tasks in robotics, like tracking or feature based mapping.
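The "constant-time model update" is the neat bit. Here is my own sketch of the idea for a sphere (the paper also handles cones and cylinders): keep the normal equations of an algebraic sphere fit as running sums, so each new point costs O(1), and solve only when needed.

```python
import numpy as np

class IncrementalSphereFit:
    """Algebraic sphere fit x^2+y^2+z^2 = 2ax + 2by + 2cz + d,
    accumulated as 4x4 normal equations (my sketch, not the paper's code)."""
    def __init__(self):
        self.AtA = np.zeros((4, 4))
        self.Atf = np.zeros(4)

    def add_point(self, x, y, z):          # O(1) per point
        row = np.array([2 * x, 2 * y, 2 * z, 1.0])
        self.AtA += np.outer(row, row)
        self.Atf += row * (x * x + y * y + z * z)

    def solve(self):
        a, b, c, d = np.linalg.solve(self.AtA, self.Atf)
        center = np.array([a, b, c])
        radius = np.sqrt(d + a * a + b * b + c * c)
        return center, radius

fit = IncrementalSphereFit()
rng = np.random.default_rng(1)
for v in rng.normal(size=(500, 3)):
    p = 3.0 * v / np.linalg.norm(v) + np.array([1.0, 2.0, 0.5])  # radius-3 sphere
    fit.add_point(*p)
print(fit.solve())   # ~ center (1, 2, 0.5), radius 3
```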
by justin
Today I finished reading “The Purloined Paperweight” by P.G. Wodehouse
by justin
Today I read a paper titled “Lens Factory: Automatic Lens Generation Using Off-the-shelf Components”
The abstract is:
Custom optics is a necessity for many imaging applications.
Unfortunately, custom lens design is costly (thousands to tens of thousands of dollars), time consuming (10-12 weeks typical lead time), and requires specialized optics design expertise.
By using only inexpensive, off-the-shelf lens components the Lens Factory automatic design system greatly reduces cost and time.
Design, ordering of parts, delivery, and assembly can be completed in a few days, at a cost in the low hundreds of dollars.
Lens design constraints, such as focal length and field of view, are specified in terms familiar to the graphics community so no optics expertise is necessary.
Unlike conventional lens design systems, which only use continuous optimization methods, Lens Factory adds a discrete optimization stage.
This stage searches the combinatorial space of possible combinations of lens elements to find novel designs, evolving simple canonical lens designs into more complex, better designs.
Intelligent pruning rules make the combinatorial search feasible.
We have designed and built several high performance optical systems which demonstrate the practicality of the system.
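The discrete stage sounds like a classic prune-heavy combinatorial search. A toy sketch of my own (thin-lens approximation; nothing like the real system's merit function): enumerate pairs of stock elements and keep those whose combined focal length hits the target.

```python
from itertools import combinations

# Hypothetical catalogue of off-the-shelf focal lengths, in millimetres.
stock_focal_lengths_mm = [25.0, 35.0, 50.0, 75.0, 100.0, -50.0, -100.0]

def combined_focal(f1, f2, t):
    """Two thin lenses separated by t: 1/f = 1/f1 + 1/f2 - t/(f1*f2)."""
    inv = 1.0 / f1 + 1.0 / f2 - t / (f1 * f2)
    return float("inf") if abs(inv) < 1e-12 else 1.0 / inv

target, tol, t = 50.0, 2.0, 5.0
candidates = [
    (f1, f2) for f1, f2 in combinations(stock_focal_lengths_mm, 2)
    if abs(combined_focal(f1, f2, t) - target) <= tol      # pruning rule
]
print(candidates)   # pairs worth passing on to finer evaluation
```

The real system evaluates surviving combinations against full image-quality criteria; the point here is just that cheap pruning makes the combinatorial space tractable.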
by justin
Today I finished reading “The Long Utopia” by Terry Pratchett
by justin
This month I am studying “Illustrator one-on-one fundamentals”
Illustrator has been completely overhauled since I last took a class in it.
I have been using Illustrator for years, but I am sure there are hidden depths I have yet to explore.
Also, I didn’t have access to the advanced Photoshop class, otherwise I would have been studying that this month.
by justin
Today I finished reading “The Daleth Effect” by Harry Harrison
by justin
First, I shall paraphrase what every article written around this paper is stating: “Creativity peaks during our early 20’s and then again in our 50’s. But let’s focus on the early 20’s.”
Complete waste of time BBC “news” article with irrelevant and unrelated image attached here:
https://www.bbc.com/news/newsbeat-48077012
And the original Ohio State University study here, which you probably don’t want to waste your time reading either:
https://www.nber.org/papers/w11799.pdf
Abstract:
This paper studies life cycle creativity among Nobel laureate economists. We identify two distinct life cycles of scholarly creativity. Experimental innovators work inductively, accumulating knowledge from experience. Conceptual innovators work deductively, applying abstract principles. We find that conceptual innovators do their most important work earlier in their careers than experimental laureates. For instance, our estimates imply that the probability that the most conceptual laureate publishes his single best work peaks at age 25 compared to the mid-50s for the most experimental laureate. Thus while experience benefits experimental innovators, newness to a field benefits conceptual innovators.
Wow!
What an absolute steaming pile of bullshit filtered through the lens of shoddy journalism from a questionable, non-longitudinal study of a limited data set (31 non-participating subjects) that focused on a single data point (citations of a science paper) in a single field (economics) set up by two people who ranked (subjectively) the style of creativity someone demonstrates.
Interestingly, this quote: “…For the most conceptual laureate, the probability of a single best year peaks at age 24.8…” indicates a single data point from a single subject that can skew the story we are telling ourselves (28.8 was the mean age for the first peak), which is just barely “in our 20’s”. Perhaps rephrasing to “our late 20’s” might be better.
The results are inconclusive and the conclusion is so littered with “weasel words” like “could” and “may” I honestly thought I was reading a paper written by someone with commitment issues.
The paper also seems to be at odds with many of the papers it cites, which state, quite clearly, that creativity peaks between the mid-30s and late 40s, but also that “creativity” is not governed so much by age as by absorption into the cultural mindset of the field, and by where the person is in their career and their life.
There is a reason why theoretical mathematicians do their “best work” in their 30’s and multiple studies have found it has nothing to do with how creative they actually are.
There’s an awful lot of articles (none of which link to the original study but appear to be just parroting each other’s misconceptions) written around this study, and everyone is throwing away forty years of psychological research into how creativity works and its peaks and valleys, quoting this paper as though it is the New Gospel and there are only two points in life where we are creative. So we’re right back where we started with ageism and erroneously defined “creative peaks.”
This paper should be treated as what it is: another data point on how creativity works. We humans really need to stop the cycle of touting the latest paper as the final answer on a subject.
Creating and creativity, to some, are like breathing: they cannot stop even if they wanted to.
by justin
Today I finished reading “The Wanderer” by Fritz Leiber
by justin
Today I read a paper titled “Merging of Bézier curves with box constraints”
The abstract is:
In this paper, we present a novel approach to the problem of merging of Bézier curves with respect to the $L_2$-norm.
We give illustrative examples to show that the solution of the conventional merging problem may not be suitable for further modification and applications.
As in the case of the degree reduction problem, we apply the so-called restricted area approach, proposed recently in (P. Gospodarczyk, Computer-Aided Design 62 (2015), 143–151), to avoid certain defects and make the resulting curve more useful.
Our method for solving the new problem is based on a box-constrained quadratic programming approach.
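Out of curiosity, here is a small Python sketch of the flavor of computation involved: a discrete least-squares stand-in for the paper's exact L2 formulation, using scipy's box-constrained solver to merge two cubic segments into one quintic whose control points must stay inside a box.

```python
import numpy as np
from math import comb
from scipy.optimize import lsq_linear

def bernstein_matrix(n, ts):
    """Rows of degree-n Bernstein basis values at parameters ts."""
    return np.array([[comb(n, i) * t**i * (1 - t)**(n - i)
                      for i in range(n + 1)] for t in ts])

# Two cubic segments forming one composite curve (x coordinates only, for brevity).
P = np.array([0.0, 1.0, 2.0, 3.0])      # segment 1 control x's
Q = np.array([3.0, 4.0, 5.0, 6.5])      # segment 2 control x's

ts = np.linspace(0, 1, 200)
B3 = bernstein_matrix(3, ts)
samples = np.concatenate([B3 @ P, B3 @ Q])          # points along both segments

t_all = np.concatenate([0.5 * ts, 0.5 + 0.5 * ts])  # composite parameter
B5 = bernstein_matrix(5, t_all)

# Box constraints keep the merged control points inside [0, 6],
# in the spirit of the restricted-area idea.
res = lsq_linear(B5, samples, bounds=(0.0, 6.0))
print(res.x.round(3))   # control x's of the merged quintic curve
```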
by justin
Today I finished reading “Stick to Drawing Comics, Monkey Brain!” by Scott Adams
by justin
Today I finished reading “The John Varley Reader” by John Varley
by justin
Today I read a paper titled “Efficient Hill-Climber for Multi-Objective Pseudo-Boolean Optimization”
The abstract is:
Local search algorithms and iterated local search algorithms are a basic technique.
Local search can be a stand-alone search method, but it can also be hybridized with evolutionary algorithms.
Recently, it has been shown that it is possible to identify improving moves in Hamming neighborhoods for k-bounded pseudo-Boolean optimization problems in constant time.
This means that local search does not need to enumerate neighborhoods to find improving moves.
It also means that evolutionary algorithms do not need to use random mutation as an operator, except perhaps as a way to escape local optima.
In this paper, we show how improving moves can be identified in constant time for multiobjective problems that are expressed as k-bounded pseudo-Boolean functions.
In particular, multiobjective forms of NK Landscapes and Mk Landscapes are considered.
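The constant-time trick is worth sketching. My own single-objective toy in Python (the paper does the multiobjective version for Mk Landscapes): keep the score delta of every one-bit flip, and after flipping bit b, recompute only the deltas of bits that share a subfunction with b, a cost that depends on k but not on n.

```python
import random
from itertools import product

random.seed(0)
n, k = 12, 3
masks = [tuple(sorted(random.sample(range(n), k))) for _ in range(n)]
tables = [{bits: random.random() for bits in product((0, 1), repeat=k)}
          for _ in masks]

def sub_value(j, x):
    return tables[j][tuple(x[i] for i in masks[j])]

def flip_delta(b, x):
    """Score change if bit b were flipped: only subfunctions touching b matter."""
    d = 0.0
    for j, m in enumerate(masks):
        if b in m:
            before = sub_value(j, x)
            x[b] ^= 1
            d += sub_value(j, x) - before
            x[b] ^= 1
    return d

x = [random.randint(0, 1) for _ in range(n)]
delta = [flip_delta(b, x) for b in range(n)]
neighbours = {b: {i for m in masks if b in m for i in m} for b in range(n)}

while True:
    improving = [b for b in range(n) if delta[b] > 1e-12]
    if not improving:
        break
    b = improving[0]
    x[b] ^= 1
    for i in neighbours[b]:     # update a k-dependent, n-independent set
        delta[i] = flip_delta(i, x)

print("local optimum:", x)
```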
by justin
Today I finished reading “Ukridge” by P.G. Wodehouse
by justin
Today I read a paper titled “On Avoidance Learning with Partial Observability”
The abstract is:
We study a framework where agents have to avoid aversive signals.
The agents are given only partial information, in the form of features that are projections of task states.
Additionally, the agents have to cope with non-determinism, defined as unpredictability on the way that actions are executed.
The goal of each agent is to define its behavior based on feature-action pairs that reliably avoid aversive signals.
We study a learning algorithm, called A-learning, that exhibits fixpoint convergence, where the belief of the allowed feature-action pairs eventually becomes fixed.
A-learning is parameter-free and easy to implement.
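The abstract suggests a pleasingly simple shape for the algorithm. This is just my reading of it in Python, not the authors' pseudocode: start by allowing every feature-action pair and permanently disallow any pair that is ever followed by an aversive signal; since the allowed set only shrinks, it must reach a fixpoint.

```python
import random

features = ["near_wall", "open_space"]
actions = ["left", "right", "forward"]
allowed = {(f, a) for f in features for a in actions}

def aversive(feature, action):
    # Hypothetical, deterministic environment (the paper's setting also
    # covers non-determinism): moving forward near a wall hurts.
    return feature == "near_wall" and action == "forward"

random.seed(0)
for _ in range(200):                       # experience loop
    f = random.choice(features)
    choices = [a for a in actions if (f, a) in allowed] or actions
    a = random.choice(choices)
    if aversive(f, a):
        allowed.discard((f, a))            # belief update: never allow again

print(sorted(allowed))                     # the fixed allowed set
```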
by justin
Today I read a paper titled “Light Efficient Flutter Shutter”
The abstract is:
Flutter shutter is a technique in which the exposure is chopped into segments and light is only integrated part of the time.
By carefully selecting the chopping sequence it is possible to better condition the data for reconstruction problems such as motion deblurring, focal sweeping, and compressed sensing.
The partial exposure trades better conditioning for less energy.
In problems such as motion deblurring, the available energy is what caused the problem in the first place (as strong illumination allows a short exposure and thus eliminates motion blur).
The trade is still worthwhile because the gain from better conditioning outweighs the cost in energy.
This document focuses on a light-efficient flutter shutter that provides better conditioning and better energy utilization than the conventional flutter shutter.
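The conditioning claim is easy to see numerically. My sketch below (with a pseudo-random binary chop sequence; real systems optimize the code) compares the blur spectrum of an always-open exposure against a fluttered one: the box kernel's spectrum has near-zeros that make deblurring ill-posed, while the coded kernel stays bounded away from zero, at the cost of integrating only about half the light.

```python
import numpy as np

n = 52                                       # number of exposure chops
box = np.ones(n)                             # conventional always-open shutter
rng = np.random.default_rng(3)
code = rng.integers(0, 2, n).astype(float)   # illustrative flutter code

def min_spectrum(kernel, pad=512):
    """Smallest magnitude of the blur kernel's frequency response."""
    return np.abs(np.fft.rfft(kernel, pad)).min()

print("box  min |FFT|:", min_spectrum(box))    # ~0: deblurring ill-posed
print("code min |FFT|:", min_spectrum(code))   # bounded away from zero
print("light kept by the code:", code.mean())  # the energy cost of chopping
```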