Today I finished reading “Negima!: Magister Negi Magi #12” by Ken Akamatsu
Read – Soft Computing Applications and Intelligent Systems
Today I finished reading “Soft Computing Applications and Intelligent Systems: Second International Multi-Conference on Artificial Intelligence Technology, August 28-29, 2013. Proceedings” by Shahrul Azman Noah
Read – Barnaby Rudge
Today I finished reading “Barnaby Rudge” by Charles Dickens
Paper – Assessing the Value of 3D Reconstruction in Building Construction
Today I read a paper titled “Assessing the Value of 3D Reconstruction in Building Construction”
The abstract is:
3-dimensional (3D) reconstruction is an emerging field in image processing and computer vision that aims to create 3D visualizations/models of objects/scenes from image sets.
However, its commercial applications and benefits are yet to be fully explored.
In this paper, we describe ongoing work towards assessing the value of 3D reconstruction in the building construction domain.
We present preliminary results from a user study whose objective is to understand how visual information is used in building construction, determine the problems with its use, and identify potential benefits and scenarios for 3D reconstruction.
Read – Little Bets
Today I finished reading “Little Bets: How Breakthrough Ideas Emerge from Small Discoveries” by Peter Sims
Read – Negima!: Magister Negi Magi #11
Today I finished reading “Negima!: Magister Negi Magi #11” by Ken Akamatsu
Read – Negima!: Magister Negi Magi #10
Today I finished reading “Negima!: Magister Negi Magi #10” by Ken Akamatsu
Paper – Augmented reality usage for prototyping speed up
Today I read a paper titled “Augmented reality usage for prototyping speed up”
The abstract is:
The first part of the article describes our approach to solving this problem by means of Augmented Reality.
Merging the real-world model with digital objects streamlines work with the model and speeds up the whole production phase significantly.
The main advantage of augmented reality is the possibility of manipulating the scene directly using a portable digital camera.
Digital objects can also be added to the scene using identification markers placed on the surface of the model.
It is therefore not necessary to work with special input devices and lose contact with the real-world model.
Adjustments are done directly on the model.
The key problem of the outlined solution is identifying an object within the camera image and replacing it with the digital object.
The second part of the article focuses on identifying the exact position and orientation of the marker within the image.
The identification marker is generalized into a triple of points representing a general plane in space.
We discuss the spatial identification of these points and how their position and orientation are represented by means of a transformation matrix.
This matrix is used for rendering the graphical objects (e.g. in OpenGL and Direct3D).
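Reading this reminded me how little code it takes to go from three tracked marker points to a render-ready transform. Here is a minimal sketch of that construction (my own, not the authors' code), assuming the three points are non-collinear and already recovered in camera space:

```python
import numpy as np

def frame_from_triple(p0, p1, p2):
    """Build a 4x4 model matrix from three non-collinear marker points.
    The x-axis runs along p0->p1, the z-axis along the plane normal,
    and the origin sits at p0 (columns laid out as OpenGL expects)."""
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in (p0, p1, p2))
    x = (p1 - p0) / np.linalg.norm(p1 - p0)
    n = np.cross(p1 - p0, p2 - p0)        # normal of the marker plane
    z = n / np.linalg.norm(n)
    y = np.cross(z, x)                    # completes a right-handed frame
    m = np.eye(4)
    m[:3, 0], m[:3, 1], m[:3, 2], m[:3, 3] = x, y, z, p0
    return m

# Example: marker points recovered in camera space.
print(frame_from_triple([0, 0, 0], [1, 0, 0], [0, 1, 0]))
```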
Read – Negima!: Magister Negi Magi #9
Today I finished reading “Negima!: Magister Negi Magi #9” by Ken Akamatsu
Read – The Startup Owner’s Manual Strategy Guide
Today I finished reading “The Startup Owner’s Manual Strategy Guide” by Steven Gary Blank
Paper – On-Board Visual Tracking with Unmanned Aircraft System (UAS)
Today I read a paper titled “On-Board Visual Tracking with Unmanned Aircraft System (UAS)”
The abstract is:
This paper presents the development of a real-time tracking algorithm that runs on a 1.2 GHz PC/104 computer on-board a small UAV.
The algorithm uses zero-mean normalized cross correlation to detect and locate an object in the image.
A Kalman filter is used to make the tracking algorithm computationally efficient.
Object position in an image frame is predicted using the motion model, and a search window, centered at the predicted position, is generated.
Object position is updated with the measurement from object detection.
The detected position is sent to the motion controller to move the gimbal so that the object stays at the center of the image frame.
Detection and tracking are carried out autonomously on the payload computer, and the system can operate in two different modes.
In the first mode, detection and tracking start from a stored image patch.
In the second mode, the operator on the ground selects the object of interest for the UAV to track.
The system is capable of re-detecting an object in the event of tracking failure.
Performance of the tracking system was verified both in the lab and on the field by mounting the payload on a vehicle and simulating a flight.
Tests show that the system can detect and track a diverse set of objects in real time.
Flight testing of the system will be conducted at the next available opportunity.
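The abstract lays the pipeline out clearly enough to sketch in a few lines. The following is my own illustrative version in OpenCV, not the authors' implementation; the constant-velocity motion model, window size, and re-detection threshold are my assumptions (TM_CCOEFF_NORMED is OpenCV's zero-mean normalized cross correlation):

```python
import cv2
import numpy as np

# Constant-velocity Kalman filter over state (x, y, vx, vy); dt = 1 frame.
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
# In practice kf.statePost would be seeded from the first detection.

def track_step(frame_gray, template, half=40):
    """One predict/detect/update cycle. Returns (x, y) or None on failure."""
    px, py = kf.predict()[:2].ravel().astype(int)
    h, w = frame_gray.shape
    # Search window centered at the prediction (must exceed template size).
    x0 = int(np.clip(px - half, 0, w - 2 * half))
    y0 = int(np.clip(py - half, 0, h - 2 * half))
    window = frame_gray[y0:y0 + 2 * half, x0:x0 + 2 * half]
    scores = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, (mx, my) = cv2.minMaxLoc(scores)
    if score < 0.6:              # assumed threshold; caller re-detects
        return None              # with a full-frame search on failure
    kf.correct(np.array([[x0 + mx], [y0 + my]], np.float32))
    return (x0 + mx, y0 + my)
```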
Read – Negima!: Magister Negi Magi #8
Today I finished reading “Negima!: Magister Negi Magi #8” by Ken Akamatsu
Paper – Toward the Graphics Turing Scale on a Blue Gene Supercomputer
Today I read a paper titled “Toward the Graphics Turing Scale on a Blue Gene Supercomputer”
The abstract is:
We investigate raytracing performance that can be achieved on a class of Blue Gene supercomputers.
We measure an 822-fold speedup over a Pentium IV on a 6144-processor Blue Gene/L.
We measure the computational performance as a function of number of processors and problem size to determine the scaling performance of the raytracing calculation on the Blue Gene.
We find nontrivial scaling behavior at large numbers of processors.
We discuss applications of this technology to scientific visualization with advanced lighting and high resolution.
We utilize three racks of a Blue Gene/L in our calculations, which is less than three percent of the capacity of the world's largest Blue Gene computer.
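A quick back-of-envelope check on the quoted figures: 822x over 6144 processors works out to roughly 13% speedup per processor, though since the baseline is a Pentium IV rather than a single Blue Gene/L core, this is a cross-architecture approximation rather than a true parallel efficiency.

```python
# Sanity check on the quoted figures: an 822x speedup on 6144 processors.
speedup, procs = 822.0, 6144
print(f"speedup per processor: {speedup / procs:.1%}")   # ~13.4%
```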
Paper – The interplay of microscopic and mesoscopic structure in complex networks
Today I read a paper titled “The interplay of microscopic and mesoscopic structure in complex networks”
The abstract is:
Not all nodes in a network are created equal.
Differences and similarities exist at both individual node and group levels.
Disentangling single node from group properties is crucial for network modeling and structural inference.
Based on unbiased generative probabilistic exponential random graph models and employing distributive message passing techniques, we present an efficient algorithm that allows one to separate the contributions of individual nodes and groups of nodes to the network structure.
This leads to improved detection accuracy of latent class structure in real world data sets compared to models that focus on group structure alone.
Furthermore, the inclusion of hitherto neglected group-specific effects in models used to assess the statistical significance of small subgraph (motif) distributions in networks may be sufficient to explain most of the observed statistics.
We show the predictive power of such generative models in forecasting putative gene-disease associations in the Online Mendelian Inheritance in Man (OMIM) database.
The approach is suitable for both directed and undirected unipartite networks as well as for bipartite networks.
Paper – Supervised Random Walks: Predicting and Recommending Links in Social Networks
Today I read a paper titled “Supervised Random Walks: Predicting and Recommending Links in Social Networks”
The abstract is:
Predicting the occurrence of links is a fundamental problem in networks.
In the link prediction problem, we are given a snapshot of a network and would like to infer which interactions among existing members are likely to occur in the near future, or which existing interactions we are missing.
Although this problem has been extensively studied, the challenge of how to effectively combine the information from the network structure with rich node and edge attribute data remains largely open.
We develop an algorithm based on Supervised Random Walks that naturally combines the information from the network structure with node and edge level attributes.
We achieve this by using these attributes to guide a random walk on the graph.
We formulate a supervised learning task where the goal is to learn a function that assigns strengths to edges in the network such that a random walker is more likely to visit the nodes to which new links will be created in the future.
We develop an efficient training algorithm to directly learn the edge strength estimation function.
Our experiments on the Facebook social graph and large collaboration networks show that our approach outperforms state-of-the-art unsupervised approaches as well as approaches that are based on feature extraction.
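To make the idea concrete for myself, here is a small sketch of the scoring half of the method: a random walk with restart whose transition probabilities come from learned edge strengths. The logistic link and the toy graph are my own assumptions; the actual paper learns the weight vector by optimizing a ranking loss, which I don't reproduce here:

```python
import numpy as np

def edge_strengths(edge_features, w):
    """Logistic link from per-edge feature vectors to positive strengths.
    edge_features: n x n x k array (zero where no edge); w: learned weights."""
    return 1.0 / (1.0 + np.exp(-edge_features @ w))

def supervised_walk_scores(adj, strengths, source, alpha=0.15, iters=100):
    """Random walk with restart from `source`, biased by edge strengths.
    High-scoring non-neighbors of `source` become link recommendations."""
    w = adj * strengths
    row_sums = w.sum(axis=1, keepdims=True)
    trans = np.divide(w, row_sums, out=np.zeros_like(w), where=row_sums > 0)
    p = np.zeros(len(adj)); p[source] = 1.0
    restart = p.copy()
    for _ in range(iters):
        p = (1 - alpha) * p @ trans + alpha * restart
    return p

# Toy usage: 3 nodes, 2 features per edge, hypothetical learned weights.
adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)
feats = np.random.default_rng(0).random((3, 3, 2))
print(supervised_walk_scores(adj, edge_strengths(feats, np.array([0.5, -0.2])), 0))
```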
Paper – Cycles of cooperation and defection in imperfect learning
Today I read a paper titled “Cycles of cooperation and defection in imperfect learning”
The abstract is:
When people play a repeated game they usually try to anticipate their opponents’ moves based on past observations, and then decide what action to take next.
Behavioural economics studies the mechanisms by which strategic decisions are taken in these adaptive learning processes.
We here investigate a model of learning the iterated prisoner’s dilemma game.
Players have the choice between three strategies, always defect (ALLD), always cooperate (ALLC) and tit-for-tat (TFT).
The only strict Nash equilibrium in this situation is ALLD.
When players learn to play this game, convergence to the equilibrium is not guaranteed; for example, we find cooperative behaviour if players discount observations in the distant past.
When agents use small samples of observed moves to estimate their opponent’s strategy the learning process is stochastic, and sustained oscillations between cooperation and defection can emerge.
These cycles are similar to those found in stochastic evolutionary processes, but the origin of the noise sustaining the oscillations is different and lies in the imperfect sampling of the opponent’s strategy.
Based on a systematic expansion technique, we are able to predict the properties of these learning cycles, providing an analytical tool with which the outcome of more general stochastic adaptation processes can be characterised.
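Out of curiosity I knocked together a toy version of the learning dynamics the abstract describes: each player estimates the opponent's strategy from a small sample of past moves and plays a noisy best response. The payoff entries and sampling parameters are my own illustrative choices, not the paper's:

```python
import numpy as np

# Approximate long-run per-round payoffs for (ALLD, ALLC, TFT) pairings
# in a repeated prisoner's dilemma with T=5, R=3, P=1, S=0 (my choice of
# values): e.g. TFT vs ALLD loses one round, then both defect forever.
A = np.array([[1, 5, 1],      # ALLD vs (ALLD, ALLC, TFT)
              [0, 3, 3],      # ALLC
              [1, 3, 3]],     # TFT
             dtype=float)

rng = np.random.default_rng(0)

def sampled_response(opp_history, sample_size=8, beta=5.0):
    """Estimate the opponent's strategy from a small sample of past moves,
    then play a softmax best response. The small sample makes the estimate
    noisy, and that noise is what sustains the cycles."""
    sample = rng.choice(opp_history, size=sample_size)
    est = np.bincount(sample, minlength=3) / sample_size
    logits = beta * (A @ est)
    p = np.exp(logits - logits.max()); p /= p.sum()
    return rng.choice(3, p=p)

hist_a, hist_b = [2], [2]                  # both players start with TFT
for _ in range(2000):
    hist_a.append(sampled_response(np.array(hist_b)))
    hist_b.append(sampled_response(np.array(hist_a)))
print(hist_a[-20:])                        # 0=ALLD, 1=ALLC, 2=TFT
```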
Listening – R.A.P. Music
This week I am listening to “R.A.P. Music” by Killer Mike
Read – Negima!: Magister Negi Magi #7
Today I finished reading “Negima!: Magister Negi Magi #7” by Ken Akamatsu
Read – Happy Days
Today I finished reading “Happy Days” by Samuel Beckett
Read – Negima!: Magister Negi Magi #6
Today I finished reading “Negima!: Magister Negi Magi #6” by Ken Akamatsu
Read – Negima!: Magister Negi Magi #5
Today I finished reading “Negima!: Magister Negi Magi #5” by Ken Akamatsu
Paper – Augmented Reality in ICT for Minimum Knowledge Loss
Today I read a paper titled “Augmented Reality in ICT for Minimum Knowledge Loss”
The abstract is:
The informatics world is digitizing human life, with contributions from across industry.
Recent surveys show that people are either not accustomed to electronic devices or unable to use them to their full potential.
At the same time, people are increasingly dependent on these technologies, and their day-to-day activities are ruled by them.
In this paper we discuss an advanced technology that will soon shape the world, making people more creative while keeping their work hassle-free.
This concept was introduced as “sixth sense” technology by an IIT Mumbai student who is presently a Ph.D. scholar at MIT, USA.
A closely related line of research goes under the title Augmented Reality.
This research creates a new association between the real world and the digital world, allowing us to share and manipulate information directly with our thoughts.
Higher College of Technology, Muscat (HCT), a college which implements state-of-the-art technology for teaching and learning, is trying to identify the opportunities and limitations of implementing augmented reality for teaching and learning.
The HCT research team presents two scenarios in which augmented reality can fit.
Since this research is at the conceptual level, we illustrate the history of the technology and how it can be adopted in the teaching environment.
Read – Negima!: Magister Negi Magi #4
Today I finished reading “Negima!: Magister Negi Magi #4” by Ken Akamatsu
Paper – 3D Geological Modeling and Visualization of Rock Masses Based on Google Earth: A Case Study
Today I read a paper titled “3D Geological Modeling and Visualization of Rock Masses Based on Google Earth: A Case Study”
The abstract is:
Google Earth (GE) has become a powerful tool for geological modeling and visualization.
An interesting and useful feature of GE, Google Street View, allows GE users to view geological structures, such as layers of rock masses, at a field site.
In this paper, we introduce a practical solution for building 3D geological models of rock masses based on data acquired with GE.
A real case study at Haut-Barr, France, is presented to demonstrate our solution.
We first locate Haut-Barr in GE, then determine the shape and scale of the rock masses in the study area, next acquire the layout of the rock layers from Google Street View, and finally create the approximate 3D geological models by extruding and intersecting.
The generated 3D geological models reflect the basic structure of the rock masses at Haut-Barr and can be used to visualize the rock bodies interactively.
Read – Negima! Magister Negi Magi #3
Today I finished reading “Negima! Magister Negi Magi #3” by Ken Akamatsu
Read – The Count of Monte Cristo
Today I finished reading “The Count of Monte Cristo” by Alexandre Dumas
Read – Conan the Destroyer
Today I finished reading “Conan the Destroyer” by Robert Howard
Paper – Magnetic measurements and kinetic energy of the superconducting condensate in SmBa_2Cu_3O_{7-δ}
Today I read a paper titled “Magnetic measurements and kinetic energy of the superconducting condensate in SmBa_2Cu_3O_{7-δ}”
The abstract is:
We report in-field kinetic energy results in the temperature region closely below the transition temperature of two differently prepared polycrystalline samples of the superconducting cuprate SmBa$_{\text{2}}$Cu$_{\text{3}}$O$_{7-\delta}$.
The kinetic energy was determined from magnetization measurements performed above the irreversibility line defined by the splitting between the curves obtained according to the ZFC and FC prescriptions.
The results are analyzed in the intermediate field regime, where the London approximation can be used for describing the magnetization.
From the analysis, estimates were obtained for the penetration depth and the upper critical field of the studied samples. The difference between the kinetic energy magnitudes of the two samples is ascribed to effects of granularity.
Read – Negima! Magister Negi Magi #2
Today I finished reading “Negima! Magister Negi Magi #2” by Ken Akamatsu
Studying – Studio photography
This month I am studying “Studio photography”
Two separate two-day workshops with a local photography studio.
Going back to my photography “roots” and getting back behind a camera for a month to try my hand at photography in the studio.
Not really my roots, because my roots are in electronics and software, but photography has been one of those subjects that pops up every now and then in my life.
Update: Between the two workshops (four days total) and then just “getting out there and doing stuff” I logged 43 hours of supervised study and practice.
Read – Negima! Magister Negi Magi #1
Today I finished reading “Negima! Magister Negi Magi #1” by Ken Akamatsu
Listening – El Objeto Antes Llamado Disco
This week I am listening to “El Objeto Antes Llamado Disco” by Café Tacuba
Paper – Chameleon: A Color-Adaptive Web Browser for Mobile OLED Displays
Today I read a paper titled “Chameleon: A Color-Adaptive Web Browser for Mobile OLED Displays”
The abstract is:
Displays based on organic light-emitting diode (OLED) technology are appearing on many mobile devices.
Unlike liquid crystal displays (LCD), OLED displays consume dramatically different power for showing different colors.
In particular, OLED displays are inefficient for showing bright colors.
This has made them undesirable for mobile devices because much of the web content uses bright colors.
To tackle this problem, we present the motivational studies, design, and realization of Chameleon, a color adaptive web browser that renders web pages with power-optimized color schemes under user-supplied constraints.
Driven by the findings from our motivational studies, Chameleon provides end users with important options, offloads tasks that are not absolutely needed in real-time, and accomplishes real-time tasks by carefully enhancing the codebase of a browser engine.
According to measurements with OLED smartphones, Chameleon is able to reduce average system power consumption for web browsing by 41% and reduce display power consumption by 64% without introducing any noticeable delay.
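The underlying observation is easy to demonstrate with a toy linear power model, where display power scales with the light each subpixel emits. The coefficients below are placeholders I made up, not measurements from the paper:

```python
import numpy as np

# Per-pixel linear OLED power model: power grows with the light each
# subpixel emits, so dark color schemes cost less. The coefficients are
# placeholders, not measured values (blue is often the costliest subpixel).
W_R, W_G, W_B, STATIC = 0.6, 0.4, 1.0, 50.0

def oled_power(image_rgb):
    """image_rgb: H x W x 3 array in [0, 1]; returns modeled power (arb. units)."""
    r, g, b = image_rgb[..., 0], image_rgb[..., 1], image_rgb[..., 2]
    return STATIC + (W_R * r + W_G * g + W_B * b).sum()

white = np.ones((100, 100, 3))                       # bright page
dark = np.zeros((100, 100, 3)); dark[..., 1] = 0.3   # dim green-on-black theme
print(oled_power(white), oled_power(dark))           # dark theme costs far less
```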
Paper – Optimal Multi-Robot Path Planning with LTL Constraints: Guaranteeing Correctness Through Synchronization
Today I read a paper titled “Optimal Multi-Robot Path Planning with LTL Constraints: Guaranteeing Correctness Through Synchronization”
The abstract is:
In this paper, we consider the automated planning of optimal paths for a robotic team satisfying a high level mission specification.
Each robot in the team is modeled as a weighted transition system where the weights have associated deviation values that capture the non-determinism in the traveling times of the robot during its deployment.
The mission is given as a Linear Temporal Logic (LTL) formula over a set of propositions satisfied at the regions of the environment.
Additionally, we have an optimizing proposition capturing some particular task that must be repeatedly completed by the team.
The goal is to minimize the maximum time between successive satisfying instances of the optimizing proposition while guaranteeing that the mission is satisfied even under non-deterministic traveling times.
Our method relies on the communication capabilities of the robots to guarantee correctness and maintain performance during deployment.
After computing a set of optimal satisfying paths for the members of the team, we also compute a set of synchronization sequences for each robot to ensure that the LTL formula is never violated during deployment.
We implement and experimentally evaluate our method considering a persistent monitoring task in a road network environment.
Paper – The Role of Computer Graphics in Documentary Film Production
Today I read a paper titled “The Role of Computer Graphics in Documentary Film Production”
The abstract is:
We discuss the role of computer graphics in the production of documentaries, a topic often ignored in favor of others.
Typically, apart from some rare occasions, documentary producers and the computer scientists or digital artists who do computer graphics are relatively far apart in their domains and rarely intercommunicate on a joint production; yet it happens, and perhaps more so in the present and the future.
We attempt to classify documentaries by the amount and techniques of computer graphics they use.
We come up with the initial categories such as “plain” (no graphics), “in-between”, “all-out” — nearly 100% of the documentary consisting of computer-generated imagery.
Computer graphics can be used to enhance the scenery, fill in the gaps in the missing storyline pieces, or animate between scenes.
It can incorporate stereoscopic effects for higher viewer impression as well as interactivity aspects.
It can also be used simply in old archived image and film restoration.
Read – Yotsuba&! #02
Today I finished reading “Yotsuba&! #02” by Kiyohiko Azuma
Listening – Oshin
This week I am listening to “Oshin” by DIIV
Paper – Cognitive Memory Network
Today I read a paper titled “Cognitive Memory Network”
The abstract is:
A resistive memory network with no crossover wiring is proposed to overcome the hardware limitations on size and functional complexity that are associated with conventional analogue neural networks.
The proposed memory network is based on simple network cells that are arranged in a hierarchical modular architecture.
Cognitive functionality of this network is demonstrated by an example of character recognition.
The network is trained by an evolutionary process to completely recognise characters deformed by random noise, rotation, scaling and shifting.
Paper – Good Friends, Bad News – Affect and Virality in Twitter
Today I read a paper titled “Good Friends, Bad News – Affect and Virality in Twitter”
The abstract is:
The link between affect, defined as the capacity for sentimental arousal on the part of a message, and virality, defined as the probability that it be sent along, is of significant theoretical and practical importance, e.g. for viral marketing.
A quantitative study of the emailing of articles from the NY Times finds a strong link between positive affect and virality, and, based on psychological theories, it is concluded that this relation is universally valid.
The conclusion appears to be in contrast with classic theory of diffusion in news media emphasizing negative affect as promoting propagation.
In this paper we explore the apparent paradox in a quantitative analysis of information diffusion on Twitter.
Twitter is interesting in this context as it has been shown to present characteristics of both social and news media.
The basic measure of virality in Twitter is the probability of retweet.
Twitter is different from email in that retweeting does not depend on pre-existing social relations but often occurs among strangers; in this respect Twitter may be more similar to traditional news media.
We therefore hypothesize that negative news content is more likely to be retweeted, while for non-news tweets positive sentiments support virality.
To test the hypothesis we analyze three corpora: A complete sample of tweets about the COP15 climate summit, a random sample of tweets, and a general text corpus including news.
The latter allows us to train a classifier that can distinguish tweets that carry news and non-news information.
We present evidence that negative sentiment enhances virality in the news segment, but not in the non-news segment.
We conclude that the relation between affect and virality is more complex than expected based on the findings of Berger and Milkman (2010), in short ‘if you want to be cited: Sweet talk your friends or serve bad news to the public’.
Paper – Slime mould computes planar shapes
Today I read a paper titled “Slime mould computes planar shapes”
The abstract is:
Computing a polygon defining a set of planar points is a classical problem of modern computational geometry.
In laboratory experiments we demonstrate that a concave hull, a connected alpha-shape without holes, of a finite planar set is approximated by slime mould Physarum polycephalum.
We represent planar points with sources of long-distance attractants and short-distance repellents and inoculate a piece of plasmodium outside the data set.
The plasmodium moves towards the data and envelops it by pronounced protoplasmic tubes.
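For reference, here is a conventional (non-biological!) sketch of the concave hull the slime mould is approximating: keep the Delaunay triangles whose circumradius falls below a threshold and take the boundary of their union. The threshold value here is arbitrary:

```python
import numpy as np
from scipy.spatial import Delaunay

def concave_hull_edges(points, alpha):
    """Keep Delaunay triangles with circumradius < alpha; boundary edges
    are those that belong to exactly one kept triangle."""
    tri = Delaunay(points)
    edge_count = {}
    for ia, ib, ic in tri.simplices:
        a, b, c = points[ia], points[ib], points[ic]
        la = np.linalg.norm(b - c)
        lb = np.linalg.norm(a - c)
        lc = np.linalg.norm(a - b)
        area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                         - (b[1] - a[1]) * (c[0] - a[0]))
        if area == 0 or la * lb * lc / (4.0 * area) >= alpha:
            continue                      # triangle too large/thin: drop it
        for e in ((ia, ib), (ib, ic), (ic, ia)):
            e = tuple(sorted(e))
            edge_count[e] = edge_count.get(e, 0) + 1
    return [e for e, n in edge_count.items() if n == 1]

pts = np.random.default_rng(1).random((60, 2))
print(len(concave_hull_edges(pts, alpha=0.2)), "boundary edges")
```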
Read – Yotsuba&! #06
Today I finished reading “Yotsuba&! #06” by Kiyohiko Azuma
Read – The Complete Book of Perfect Phrases Book for Effective Managers
Today I finished reading “The Complete Book of Perfect Phrases Book for Effective Managers” by Douglas Max
Listening – Beard, Wives, Denim
This week I am listening to “Beard, Wives, Denim” by Pond
Paper – Scientific Visualization in Astronomy: Towards the Petascale Astronomy Era
Today I read a paper titled “Scientific Visualization in Astronomy: Towards the Petascale Astronomy Era”
The abstract is:
Astronomy is entering a new era of discovery, coincident with the establishment of new facilities for observation and simulation that will routinely generate petabytes of data.
While an increasing reliance on automated data analysis is anticipated, a critical role will remain for visualization-based knowledge discovery.
We have investigated scientific visualization applications in astronomy through an examination of the literature published during the last two decades.
We identify the two most active fields for progress – visualization of large-N particle data and spectral data cubes – discuss open areas of research, and introduce a mapping between astronomical sources of data and data representations used in general purpose visualization tools.
We discuss contributions using high performance computing architectures (e.g. distributed processing and GPUs), collaborative astronomy visualization, the use of workflow systems to store metadata about visualization parameters, and the use of advanced interaction devices.
We examine a number of issues that may be limiting the spread of scientific visualization research in astronomy and identify six grand challenges for scientific visualization research in the Petascale Astronomy Era.
Paper – Intelligent Car System
Today I read a paper titled “Intelligent Car System”
The abstract is:
In modern life, road safety has become a core issue.
A single move by a driver can cause a horrifying accident.
The main goal of the intelligent car system is to communicate with other cars on the road.
The system is able to control speed, direction, and the distance between cars; it can also recognize traffic lights and make decisions according to them.
This paper presents a framework for the intelligent car system.
We validate several aspects of our system using simulation.
Listening – Psychedelic Pill
This week I am listening to “Psychedelic Pill” by Neil Young And Crazy Horse
Paper – Skeletal Representations and Applications
Today I read a paper titled “Skeletal Representations and Applications”
The abstract is:
When representing a solid object there are alternatives to the use of traditional explicit (surface meshes) or implicit (zero crossing of implicit functions) methods.
Skeletal representations encode shape information in a mixed fashion: they are composed of a set of explicit primitives, yet they are able to efficiently encode the shape’s volume as well as its topology.
I will discuss, in two dimensions, how symmetry can be used to reduce the dimensionality of the data (from a 2D solid to a 1D curve), and how this relates to the classical definition of skeletons by Medial Axis Transform.
While the medial axis of a 2D shape is composed of a set of curves, in 3D it results in a set of sheets connected in a complex fashion.
Because of this complexity, medial skeletons are difficult to use in practical applications.
Curve skeletons address this problem by strictly requiring their geometry to be one dimensional, resulting in an intuitive yet powerful shape representation.
In this report I will define both medial and curve skeletons and discuss their mutual relationship.
I will also present several algorithms for their computation and a variety of scenarios where skeletons are employed, with a special focus on geometry processing and shape analysis.
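As a quick experiment, the 2D medial axis discussed in the report can be computed in a couple of lines with scikit-image; the rectangle input here is just a stand-in shape of my own:

```python
import numpy as np
from skimage.morphology import medial_axis

# Build a toy binary shape and extract its medial-axis skeleton.
# Each skeleton pixel paired with its distance value corresponds to a
# maximal inscribed disc, which is why skeleton + distances encode shape.
shape = np.zeros((64, 64), dtype=bool)
shape[16:48, 8:56] = True                      # a simple rectangular solid
skel, dist = medial_axis(shape, return_distance=True)
print("skeleton pixels:", skel.sum())
print("largest inscribed-disc radius:", dist[skel].max())
```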
Studying – Creating responsive SVGs in Illustrator and CSS
This month I am studying “Creating responsive SVGs in Illustrator and CSS”
Read – The Dreaming #3
Today I finished reading “The Dreaming #3” by Queenie Chan
Listening – Pacifica
This week I am listening to “Pacifica” by The Presets