This month I am studying “Autodesk Maya scripting with MEL”
Somebody needs to think about this stuff...
by justin
This week I am listening to “To Lose My Life” by White Lies
by justin
Today I read a paper titled “Region-based active contour with noise and shape priors”
The abstract is:
In this paper, we propose to formally combine noise and shape priors in region-based active contours.
On the one hand, we use the general framework of exponential family as a prior model for noise.
On the other hand, translation and scale invariant Legendre moments are considered to incorporate the shape prior (e.g., fidelity to a reference shape).
The combination of the two prior terms in the active contour functional yields the final evolution equation whose evolution speed is rigorously derived using shape derivative tools.
Experimental results on both synthetic images and real life cardiac echography data clearly demonstrate the robustness to initialization and noise, flexibility and large potential applicability of our segmentation algorithm.
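For intuition, here is a schematic of how such a combined functional might look; the notation is mine, not the paper's, and the exact terms in the paper differ:

```latex
% Schematic combined functional (my notation, not the paper's):
% a noise term (negative log-likelihood under an exponential-family
% density p_theta) plus a weighted shape-prior term comparing invariant
% Legendre moments of the evolving region to those of a reference shape.
\[
E(\Omega) = \int_{\Omega} -\log p_{\theta}\big(I(\mathbf{x})\big)\, d\mathbf{x}
  + \beta \sum_{p+q \le N}
    \big( \lambda_{pq}(\Omega) - \lambda_{pq}(\Omega_{\mathrm{ref}}) \big)^{2}
\]
% The evolution equation then follows from the shape derivative of E.
```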
by justin
Today I read a paper titled “Generalized Kernel-based Visual Tracking”
The abstract is:
In this work we generalize plain mean shift (MS) trackers and attempt to overcome two limitations of standard MS trackers.
It is well known that modeling and maintaining a representation of a target object is an important component of a successful visual tracker.
However, little work has been done on building a robust template model for kernel-based MS tracking.
In contrast to building a template from a single frame, we train a robust object representation model from a large amount of data.
Tracking is viewed as a binary classification problem, and a discriminative classification rule is learned to distinguish between the object and background.
We adopt a support vector machine (SVM) for training.
The tracker is then implemented by maximizing the classification score.
An iterative optimization scheme very similar to MS is derived for this purpose.
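To make "maximizing the classification score" concrete, here is a minimal toy sketch; the data, patch size, and local exhaustive search are my stand-ins (the paper derives a mean-shift-like iterative optimizer instead, and presumably uses richer features):

```python
# Toy sketch: train an SVM on object vs. background patches offline,
# then track by picking the window location with the highest SVM score.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Hypothetical training data: flattened 8x8 patches of object vs. background.
X_obj = rng.normal(1.0, 0.5, size=(200, 64))   # object patches
X_bg = rng.normal(0.0, 0.5, size=(200, 64))    # background patches
X = np.vstack([X_obj, X_bg])
y = np.r_[np.ones(200), np.zeros(200)]

svm = LinearSVC().fit(X, y)

def track(frame, prev_xy, patch=8, radius=4):
    """Local exhaustive search of the SVM decision score around prev_xy
    (the paper instead derives a mean-shift-like iterative optimizer)."""
    best_xy, best_score = prev_xy, -np.inf
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            x0, y0 = prev_xy[0] + dx, prev_xy[1] + dy
            window = frame[y0:y0 + patch, x0:x0 + patch]
            if window.shape != (patch, patch):
                continue
            score = svm.decision_function(window.reshape(1, -1))[0]
            if score > best_score:
                best_xy, best_score = (x0, y0), score
    return best_xy

# Toy frame with a bright "object" near (20, 20).
frame = rng.normal(0.0, 0.5, size=(64, 64))
frame[20:28, 20:28] += 1.0
print(track(frame, (18, 18)))
```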
by justin
Today I finished reading “The Feynman Lectures on Physics Vol 12” by Richard Feynman
by justin
Today I finished reading “The Adventure of Charles Augustus Milverton” by Arthur Conan Doyle
by justin
This week I am listening to “The Hazards Of Love” by The Decemberists
by justin
Today I finished reading “The Law of Success, Volume II: Principles of Personal Power” by Napoleon Hill
by justin
Today I finished reading “The Accidental Time Machine” by Joe Haldeman
by justin
Today I read a paper titled “Variations of the Turing Test in the Age of Internet and Virtual Reality”
The abstract is:
Inspired by Hofstadter’s Coffee-House Conversation (1982) and by the science fiction short story SAM by Schattschneider (1988), we propose and discuss criteria for non-mechanical intelligence.
Firstly, we emphasize the practical need for such tests in view of massively multiuser online role-playing games (MMORPGs) and virtual reality systems like Second Life.
Secondly, we demonstrate that Second Life provides a useful framework for implementing (some variations of) such a test.
by justin
This week I am listening to “Tonight: Franz Ferdinand” by Franz Ferdinand
by justin
Today I read a paper titled “Perfect Hashing for Data Management Applications”
The abstract is:
Perfect hash functions can potentially be used to compress data in connection with a variety of data management tasks.
Though there has been considerable work on how to construct good perfect hash functions, there is a gap between theory and practice among all previous methods on minimal perfect hashing.
On one side, there are good theoretical results without experimentally proven practicality for large key sets.
On the other side, there are algorithms whose time and space usage have been theoretically analyzed, but under the unrealistic assumption that truly random hash functions are available for free.
In this paper we attempt to bridge this gap between theory and practice, using a number of techniques from the literature to obtain a novel scheme that is theoretically well-understood and at the same time achieves an order-of-magnitude increase in performance compared to previous “practical” methods.
This improvement comes from a combination of a novel, theoretically optimal perfect hashing scheme that greatly simplifies previous methods, and the fact that our algorithm is designed to make good use of the memory hierarchy.
We demonstrate the scalability of our algorithm by considering a set of over one billion URLs from the World Wide Web of average length 64, for which we construct a minimal perfect hash function on a commodity PC in a little more than 1 hour.
Our scheme produces minimal perfect hash functions using slightly more than 3 bits per key.
For perfect hash functions in the range $\{0,\dots,2n-1\}$ the space usage drops to just over 2 bits per key (i.e., one bit more than optimal for representing the key).
This is significantly below what has been achieved previously for very large values of $n$.
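To illustrate what "minimal perfect" means, here is a toy brute-force sketch plus the bits-per-key arithmetic from the abstract; this seed search is emphatically not the paper's scheme, which scales to billions of keys:

```python
# Toy illustration of a minimal perfect hash function (MPHF):
# a collision-free map from n fixed keys onto exactly {0, ..., n-1}.
# Brute-force seed search only works for tiny key sets.
import hashlib

def h(key: str, seed: int, n: int) -> int:
    digest = hashlib.sha256(f"{seed}:{key}".encode()).digest()
    return int.from_bytes(digest[:8], "big") % n

def find_mphf_seed(keys):
    n = len(keys)
    seed = 0
    while True:
        if len({h(k, seed, n) for k in keys}) == n:  # bijective onto 0..n-1
            return seed
        seed += 1

keys = ["apple", "banana", "cherry", "date"]
seed = find_mphf_seed(keys)
print({k: h(k, seed, len(keys)) for k in keys})

# Space figure from the abstract: ~3 bits/key for 10^9 keys.
print(f"{3 * 1e9 / 8 / 2**20:.0f} MiB for a billion-key MPHF at 3 bits/key")
```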
by justin
This week I am listening to “Wilco (The Album)” by Wilco
by justin
Today I finished reading “Coders at Work: Reflections on the Craft of Programming” by Peter Seibel
by justin
Today I read a paper titled “Integration of navigation and action selection functionalities in a computational model of cortico-basal ganglia-thalamo-cortical loops”
The abstract is:
This article describes a biomimetic control architecture affording an animat both action selection and navigation functionalities.
It satisfies the survival constraint of an artificial metabolism and supports several complementary navigation strategies.
It builds upon an action selection model based on the basal ganglia of the vertebrate brain, using two interconnected cortico-basal ganglia-thalamo-cortical loops: a ventral one concerned with appetitive actions and a dorsal one dedicated to consummatory actions.
The performances of the resulting model are evaluated in simulation.
The experiments assess the prolonged survival permitted by the use of high level navigation strategies and the complementarity of navigation strategies in dynamic environments.
The correctness of the behavioral choices in situations of antagonistic or synergetic internal states is also tested.
Finally, the modeling choices are discussed with regard to their biomimetic plausibility, while the experimental results are assessed in terms of animat adaptivity.
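As a caricature of the two-loop division of labor (my simplification, not the authors' basal ganglia model), selection might look like:

```python
# Hypothetical sketch: a "dorsal" selector triggers consummatory actions
# when a resource is at hand; a "ventral" selector picks the appetitive
# action (which resource to seek) from the largest internal deficit.
def select_action(energy, water, at_food, at_water):
    # dorsal loop: consummatory actions when a resource is reachable
    if at_food and energy < 1.0:
        return "eat"
    if at_water and water < 1.0:
        return "drink"
    # ventral loop: appetitive action toward the largest deficit
    return "seek_food" if energy <= water else "seek_water"

print(select_action(energy=0.3, water=0.8, at_food=False, at_water=False))
```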
by justin
Today I finished reading “What the Dog Saw and Other Adventures” by Malcolm Gladwell
by justin
Today I read a paper titled “Face Detection Using Adaboosted SVM-Based Component Classifier”
The abstract is:
Recently, AdaBoost has been widely used to improve the accuracy of any given learning algorithm.
In this paper we focus on designing an algorithm that combines AdaBoost with support vector machines (SVMs) as weak component classifiers for the face detection task.
To obtain a set of effective SVM weak-learner classifiers, the algorithm adaptively adjusts the SVM kernel parameter instead of using a fixed one.
The proposed combination generalizes better than a single SVM on imbalanced classification problems.
The proposed method is compared, in terms of classification accuracy, to other commonly used AdaBoost methods, such as decision trees and neural networks, on the CMU+MIT face database.
Results indicate that the performance of the proposed method is overall superior to previous AdaBoost approaches.
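A rough sketch of the idea as I read it, with synthetic data rather than faces and a made-up kernel-width schedule:

```python
# Sketch of AdaBoost with RBF-SVM component classifiers: each round fits
# an SVM on the weighted data, and the RBF kernel width sigma is adjusted
# adaptively rather than fixed. Not the paper's code.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))
y = np.where(X[:, 0] * X[:, 1] > 0, 1, -1)       # toy nonlinear labels

n, T = len(X), 10
w = np.full(n, 1.0 / n)                          # sample weights
sigma = 2.0                                      # initial kernel width
stages = []

for t in range(T):
    clf = SVC(kernel="rbf", gamma=1.0 / (2 * sigma**2))
    clf.fit(X, y, sample_weight=w)
    pred = clf.predict(X)
    err = w @ (pred != y)
    if err >= 0.5:                               # weak learner too weak:
        sigma *= 0.9                             # tighten kernel, skip round
        continue
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
    stages.append((alpha, clf))
    w *= np.exp(-alpha * y * pred)               # AdaBoost reweighting
    w /= w.sum()
    sigma *= 0.95                                # hypothetical schedule

def strong_classify(X_new):
    scores = sum(a * c.predict(X_new) for a, c in stages)
    return np.sign(scores)

print((strong_classify(X) == y).mean())
```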
by justin
Today I finished reading “The Mating Season” by P.G. Wodehouse
by justin
This week I am listening to “The Eternal” by Sonic Youth
by justin
Today I read a paper titled “Teaching Physics Using Virtual Reality”
The abstract is:
We present an investigation of game-like simulations for physics teaching.
We report on the effectiveness of the interactive simulation “Real Time Relativity” for learning special relativity.
We argue that the simulation not only enhances traditional learning, but also enables new types of learning that challenge the traditional curriculum.
The lessons drawn from this work are being applied to the development of a simulation for enhancing the learning of quantum mechanics.
by justin
This month I am studying “Autodesk Maya”
Refresher course in the latest version of Maya
by justin
This week I am listening to “Forget And Not Slow Down” by Relient K
by justin
Today I read a paper titled “Robust Global Localization Using Clustered Particle Filtering”
The abstract is:
Global mobile robot localization is the problem of determining a robot’s pose in an environment, using sensor data, when the starting position is unknown.
A family of probabilistic algorithms known as Monte Carlo Localization (MCL) is currently among the most popular methods for solving this problem.
MCL algorithms represent a robot’s belief by a set of weighted samples, which approximate the posterior probability of where the robot is located by using a Bayesian formulation of the localization problem.
This article presents an extension to the MCL algorithm, which addresses its problems when localizing in highly symmetrical environments; a situation where MCL is often unable to correctly track equally probable poses for the robot.
The problem arises from the fact that sample sets in MCL often become impoverished when samples are generated according to their posterior likelihood.
Our approach incorporates the idea of clusters of samples and modifies the proposal distribution considering the probability mass of those clusters.
Experimental results are presented that show that this new extension to the MCL algorithm successfully localizes in symmetric environments where ordinary MCL often fails.
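A toy sketch of the clustering idea as I read the abstract (the clustering method, poses, and numbers are my stand-ins, not the authors' implementation):

```python
# Sketch: group particles into clusters of similar pose hypotheses and
# resample per cluster, so equally probable poses in a symmetric
# environment all keep their particle support.
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(2)

def cluster_resample(particles, weights, n_clusters=2):
    """Crude k-means on (x, y), then resample within each cluster in
    proportion to the cluster's total probability mass."""
    _, labels = kmeans2(particles[:, :2], n_clusters, seed=3, minit="++")
    new = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        if idx.size == 0:
            continue
        mass = weights[idx].sum()              # cluster's probability mass
        k = max(1, int(round(mass * len(particles))))
        p = weights[idx] / mass
        new.append(particles[rng.choice(idx, size=k, p=p)])
    return np.vstack(new)

# Two equally likely pose hypotheses, as in a symmetric corridor.
particles = np.vstack([rng.normal([0, 0, 0], 0.1, (50, 3)),
                       rng.normal([10, 0, np.pi], 0.1, (50, 3))])
weights = np.full(100, 0.01)
print(cluster_resample(particles, weights).shape)
```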
by justin
Today I read a paper titled “Asynchronous Remote Medical Consultation for Ghana”
The abstract is:
Computer-mediated communication systems can be used to bridge the gap between doctors in underserved regions with local shortages of medical expertise and medical specialists worldwide.
To this end, we describe the design of a prototype remote consultation system intended to provide the social, institutional and infrastructural context for sustained, self-organizing growth of a globally-distributed Ghanaian medical community.
The design is grounded in an iterative design process that included two rounds of extended design fieldwork throughout Ghana and draws on three key design principles (social networks as a framework on which to build incentives within a self-organizing network; optional and incremental integration with existing referral mechanisms; and a weakly-connected, distributed architecture that allows for a highly interactive, responsive system despite failures in connectivity).
We discuss initial experiences from an ongoing trial deployment in southern Ghana.
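The "weakly-connected, distributed" design principle is essentially store-and-forward; a toy sketch of that idea (my illustration, not the project's code):

```python
# Toy store-and-forward sketch: consultation messages queue locally and
# sync opportunistically, so the UI stays responsive during connectivity
# failures.
import queue

outbox = queue.Queue()

def send_consult(case_id, text):
    outbox.put({"case": case_id, "text": text})   # returns immediately

def sync(link_up: bool):
    """Called periodically; drains the queue only when the link is up."""
    delivered = []
    while link_up and not outbox.empty():
        delivered.append(outbox.get())
    return delivered

send_consult(1, "ECG attached; please advise.")
print(sync(link_up=False))   # [] -- offline, message stays queued
print(sync(link_up=True))    # delivered once connectivity returns
```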
by justin
This week I am listening to “Music For Men” by The Gossip
by justin
Today I read a paper titled “Modeling Epidemic Spread in Synthetic Populations – Virtual Plagues in Massively Multiplayer Online Games”
The abstract is:
A virtual plague is a process in which a behavior-affecting property spreads among characters in a Massively Multiplayer Online Game (MMOG).
The MMOG individuals constitute a synthetic population, and the game can be seen as a form of interactive executable model for studying disease spread, albeit of a very special kind.
To a game developer maintaining an MMOG, recognizing, monitoring, and ultimately controlling a virtual plague is important, regardless of how it was initiated.
The prospect of using tools, methods and theory from the field of epidemiology to do this seems natural and appealing.
We will address the feasibility of such a prospect, first by considering some basic measures used in epidemiology, then by pointing out the differences between real world epidemics and virtual plagues.
We also suggest directions for MMOG developer control through epidemiological modeling.
Our aim is to understand the properties of virtual plagues, rather than to eliminate them or mitigate their effects, as would be the case for a real infectious disease.
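The "basic measures used in epidemiology" start from compartmental models like SIR; a minimal sketch with made-up parameters (standard textbook model, not anything from the paper):

```python
# Minimal SIR sketch: S/I/R = susceptible/infected/recovered players.
# Parameter values are hypothetical, not from the paper or any MMOG.
def sir_step(S, I, R, beta, gamma, dt=1.0):
    """One Euler step of dS/dt = -beta*S*I/N, dI/dt = beta*S*I/N - gamma*I,
    dR/dt = gamma*I."""
    N = S + I + R
    new_inf = beta * S * I / N * dt
    new_rec = gamma * I * dt
    return S - new_inf, I + new_inf - new_rec, R + new_rec

S, I, R = 9990.0, 10.0, 0.0        # 10 initially "plagued" characters
beta, gamma = 0.5, 0.1             # hypothetical contact/recovery rates
for day in range(60):
    S, I, R = sir_step(S, I, R, beta, gamma)
print(f"after 60 days: S={S:.0f}, I={I:.0f}, R={R:.0f}")
```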
by justin
Today I finished reading “iCon: Steve Jobs, the Greatest Second Act in the History of Business” by Jeffrey S. Young
by justin
Today I read a paper titled “Efficient Binary and Run Length Morphology and its Application to Document Image Processing”
The abstract is:
This paper describes the implementation and evaluation of an open source library for mathematical morphology based on packed binary and run-length compressed images for document imaging applications.
Abstractions and patterns useful in the implementation of the interval operations are described.
A number of benchmarks and comparisons to bit-blit based implementations on standard document images are provided.
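For a feel of why run-length representations suit morphology, here is a toy sketch of 1-D erosion and dilation on run-encoded rows (my illustration; the library's actual interval operations are more general):

```python
# Morphology directly on run-length encoded rows: erosion by a horizontal
# structuring element of width 2k+1 shrinks each run by k on each side;
# dilation grows runs by k and merges overlaps.
def erode_runs(runs, k):
    """runs: list of (start, end) half-open intervals of foreground pixels."""
    return [(s + k, e - k) for s, e in runs if e - s > 2 * k]

def dilate_runs(runs, k):
    grown = sorted((s - k, e + k) for s, e in runs)
    merged = []
    for s, e in grown:
        if merged and s <= merged[-1][1]:          # overlapping runs: merge
            merged[-1] = (merged[-1][0], max(merged[-1][1], e))
        else:
            merged.append((s, e))
    return merged

row = [(2, 10), (12, 13), (20, 30)]
print(erode_runs(row, 2))    # [(4, 8), (22, 28)]
print(dilate_runs(row, 2))   # [(0, 15), (18, 32)]
```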
by justin
Today I finished reading “iPhone Game Development” by Chris Craft
by justin
Today I finished reading “Y: The Last Man #9: Motherland” by Brian K. Vaughan
by justin
Today I finished reading “Y: The Last Man #8: Kimono Dragons” by Brian K. Vaughan
by justin
This week I am listening to “Mean Everything To Nothing” by Manchester Orchestra
by justin
I judge people by their supposed service animals.
If you have a service animal in a jacket, with lots of patches, you are declaring to the world: service animal.
But I notice that the louder people declare something, the less true it usually is.
by justin
Today I finished reading “Y: The Last Man #7: Paper Dolls” by Brian K. Vaughan
by justin
How to spot a “fake service dog?”
Watch the handler.
If the person handling the service dog isn’t treating the dog like a part of them, it is a good bet that it’s not a service dog, though not always guaranteed.
by justin
Today I finished reading “Perfect Phrases for Letters of Recommendation” by Paul Bodine
by justin
Today I finished reading “Y: The Last Man #10: Whys and Wherefores” by Brian K. Vaughan
by justin
Today I finished reading “The Android’s Dream” by John Scalzi
by justin
Today I read a paper titled “Olfactory search at high Reynolds number”
The abstract is:
Locating the source of odor in a turbulent environment – a common behavior for living organisms – is non-trivial because of the random nature of mixing.
Here we analyze the statistical physics aspects of the problem and propose an efficient strategy for olfactory search which can work in turbulent plumes.
The algorithm combines the maximum likelihood inference of the source position with an active search.
Our approach provides the theoretical basis for the design of olfactory robots and the quantitative tools for the analysis of the observed olfactory search behavior of living creatures (e.g., odor-modulated optomotor anemotaxis of moths).
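A toy sketch of the maximum-likelihood ingredient (the detection model, grid, and greedy move are my assumptions; the paper's combination of inference with active search is more sophisticated):

```python
# Maintain a posterior over source location on a grid, update it with each
# binary odor detection, and move toward the current ML estimate.
import numpy as np

rng = np.random.default_rng(4)
grid = np.indices((40, 40)).reshape(2, -1).T          # candidate source cells
posterior = np.full(len(grid), 1.0 / len(grid))

def hit_prob(source, pos, scale=5.0):
    """Hypothetical detection model: hit rate decays with distance."""
    d = np.linalg.norm(source - pos, axis=-1)
    return np.exp(-d / scale)

source, pos = np.array([30, 10]), np.array([5, 5])
for _ in range(50):
    hit = rng.random() < hit_prob(source, pos)         # simulated sensor
    like = hit_prob(grid, pos)
    posterior *= like if hit else (1.0 - like)         # Bayesian update
    posterior /= posterior.sum()
    pos = grid[np.argmax(posterior)]                   # greedy ML move (toy)

print("ML estimate:", grid[np.argmax(posterior)], "true:", source)
```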
by justin
Today I finished reading “Y: The Last Man #6: Girl on Girl” by Brian K. Vaughan
by justin
Today I finished reading “Mariposa” by Greg Bear
by justin
This week I am listening to “Up From Below” by Edward Sharpe & The Magnetic Zeros
by justin
Today I read a paper titled “Recognition of expression variant faces using masked log-Gabor features and Principal Component Analysis”
The abstract is:
In this article we propose a method for the recognition of faces with different facial expressions.
For recognition we extract feature vectors by using log-Gabor filters of multiple orientations and scales.
Using a sliding-window algorithm and variance-based masking, these features are extracted from image regions that are less affected by changes in facial expression.
The extracted features are passed to a Principal Component Analysis (PCA)-based recognition method.
The results of face recognition experiments using expression variant faces showed that the proposed method could achieve higher recognition accuracy than many other methods.
For development and testing we used facial images from the AR and FERET databases.
Using facial photographs of more than one thousand persons from the FERET database, the proposed method achieved a 96.6-98.9% rank-one recognition rate and a 0.2-0.6% Equal Error Rate (EER).
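A minimal sketch of the PCA recognition stage, with random stand-ins for the log-Gabor features (standard eigen-projection plus nearest neighbor; not the paper's code):

```python
# PCA recognition: project gallery and probe features onto the top
# principal components, then identify by nearest neighbor (rank-one match).
import numpy as np

rng = np.random.default_rng(5)
gallery = rng.normal(size=(100, 512))        # one feature vector per person

mean = gallery.mean(axis=0)
U, s, Vt = np.linalg.svd(gallery - mean, full_matrices=False)
components = Vt[:50]                         # keep 50 components

def project(x):
    return (x - mean) @ components.T

gallery_proj = project(gallery)

def rank_one_match(probe):
    """Nearest neighbor in PCA space = rank-one identification."""
    d = np.linalg.norm(gallery_proj - project(probe), axis=1)
    return int(np.argmin(d))

probe = gallery[7] + rng.normal(scale=0.1, size=512)   # noisy probe of person 7
print(rank_one_match(probe))                           # expect 7
```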
by justin
Today I finished reading “Y: The Last Man #5: Ring of Truth” by Brian K. Vaughan
by justin
Today I read a paper titled “How to Beat the Adaptive Multi-Armed Bandit”
The abstract is:
The multi-armed bandit is a concise model for the problem of iterated decision-making under uncertainty.
In each round, a gambler must pull one of $K$ arms of a slot machine, without any foreknowledge of their payouts, except that they are uniformly bounded.
A standard objective is to minimize the gambler’s regret, defined as the gambler’s total payout minus the largest payout which would have been achieved by any fixed arm, in hindsight.
Note that the gambler is only told the payout for the arm actually chosen, not for the unchosen arms.
Almost all previous work on this problem assumed the payouts to be non-adaptive, in the sense that the distribution of the payout of arm $j$ in round $i$ is completely independent of the choices made by the gambler on rounds $1, \dots, i-1$.
In the more general model of adaptive payouts, the payouts in round $i$ may depend arbitrarily on the history of past choices made by the algorithm.
We present a new algorithm for this problem, and prove nearly optimal guarantees for the regret against both non-adaptive and adaptive adversaries.
After $T$ rounds, our algorithm has regret $O(\sqrt{T})$ with high probability (the tail probability decays exponentially).
This dependence on $T$ is best possible, and matches that of the full-information version of the problem, in which the gambler is told the payouts for all $K$ arms after each round.
Previously, even for non-adaptive payouts, the best high-probability bounds known were $O(T^{2/3})$, due to Auer, Cesa-Bianchi, Freund and Schapire.
The expected regret of their algorithm is $O(T^{1/2})$ for non-adaptive payouts, but, as we show, $\Omega(T^{2/3})$ for adaptive payouts.
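For reference, here is a sketch of the classic Exp3 baseline of Auer et al. that the paper improves on (this is not the paper's new algorithm; the parameter choices are illustrative):

```python
# Exp3: exponential weights with importance-weighted payout estimates,
# since only the pulled arm's payout is observed each round.
import numpy as np

rng = np.random.default_rng(6)
K, T, eta = 5, 10000, 0.05
true_means = rng.uniform(0.2, 0.8, size=K)     # hypothetical non-adaptive arms

log_weights = np.zeros(K)
total = 0.0
for t in range(T):
    p = np.exp(log_weights - log_weights.max())
    p /= p.sum()
    p = (1 - eta) * p + eta / K                # mix in uniform exploration
    arm = rng.choice(K, p=p)
    payout = float(rng.random() < true_means[arm])   # bounded in [0, 1]
    total += payout
    log_weights[arm] += eta * payout / (p[arm] * K)  # importance weighting

best = true_means.max() * T
print(f"regret vs. best fixed arm: about {best - total:.0f} after T={T} rounds")
```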
by justin
Today I finished reading “Y: The Last Man #4: Safeword” by Brian K. Vaughan
by justin
Today I finished reading “Y: The Last Man #3: One Small Step” by Brian K. Vaughan
by justin
Today I finished reading “How Right You Are, Jeeves” by P.G. Wodehouse
by justin
Today I finished reading “Y: The Last Man #2: Cycles” by Brian K. Vaughan
by justin
This month I am studying “Personal branding basics”