No pedant (worth his moniker) ever said “I tried but I didn’t feel like commenting.”
Paper – Motif Analysis in the Amazon Product Co-Purchasing Network
Today I read a paper titled “Motif Analysis in the Amazon Product Co-Purchasing Network”
The abstract is:
Online stores like Amazon and eBay are growing by the day.
Fewer people go to department stores, preferring instead the convenience of purchasing online.
These stores may employ a number of techniques to advertise and recommend the appropriate product to the appropriate buyer profile.
This article evaluates various 3-node and 4-node motifs occurring in such networks.
Community structures are evaluated too. These results may provide interesting insights into user behavior and a better understanding of marketing techniques.
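The 3-node motif census the paper performs can be illustrated with a brute-force counter over node triples. This is not the paper's method, and it is far too slow for the real Amazon graph, but it shows what is being counted; the toy co-purchase graph below is invented for illustration:

```python
from itertools import combinations

def count_3node_motifs(adj):
    """Count connected 3-node motifs in an undirected graph.

    adj maps node -> set of neighbors.
    Returns (open_triads, triangles)."""
    triangles = 0
    open_triads = 0
    for a, b, c in combinations(adj, 3):
        edges = (b in adj[a]) + (c in adj[a]) + (c in adj[b])
        if edges == 3:    # all three edges present: a closed triangle
            triangles += 1
        elif edges == 2:  # a path through a middle node: an open triad
            open_triads += 1
    return open_triads, triangles

# Hypothetical co-purchase graph: nodes are products, edges mean
# "frequently bought together".
adj = {
    "book":  {"lamp", "desk"},
    "lamp":  {"book", "desk"},
    "desk":  {"book", "lamp", "chair"},
    "chair": {"desk"},
}
print(count_3node_motifs(adj))  # (2, 1): two open triads, one triangle
```

Real motif analyses use specialized census algorithms and compare the counts against randomized null-model graphs; the counting itself is the easy part.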
Read – Pro C# 5.0 and the .NET 4.5 Framework
Today I finished reading “Pro C# 5.0 and the .NET 4.5 Framework” by Andrew Troelsen
Paper – Text/Graphics Separation and Skew Correction of Text Regions of Business Card Images for Mobile Devices
Today I read a paper titled “Text/Graphics Separation and Skew Correction of Text Regions of Business Card Images for Mobile Devices”
The abstract is:
Separation of the text regions from background texture and graphics is an important step of any optical character recognition system for the images containing both texts and graphics.
In this paper, we have presented a novel text/graphics separation technique and a method for skew correction of text regions extracted from business card images captured with a cell-phone camera.
At first, the background is eliminated at a coarse level based on intensity variance.
This makes the foreground components distinct from each other.
Then the non-text components are removed using various characteristic features of text and graphics.
Finally, the text regions are skew corrected for further processing.
Experimenting with business card images of various resolutions, we have found an optimum performance of 98.25% (recall) with 0.75 MP images, which takes 0.17 seconds processing time and 1.1 MB peak memory on a moderately powerful computer (DualCore 1.73 GHz Processor, 1 GB RAM, 1 MB L2 Cache).
The developed technique is computationally efficient and consumes low memory so as to be applicable on mobile devices.
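The coarse background-elimination step (flat regions have low intensity variance, so variance above a threshold marks foreground) can be sketched as below. The block size and threshold are arbitrary illustrative values, not taken from the paper:

```python
def variance_mask(img, block=8, thresh=100.0):
    """Coarse background elimination: mark a block as foreground when
    the intensity variance inside it exceeds a threshold (flat regions
    are assumed to be background).

    img is a 2D list of grayscale values; returns a same-sized boolean mask."""
    h, w = len(img), len(img[0])
    mask = [[False] * w for _ in range(h)]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            vals = [img[y][x]
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))]
            mean = sum(vals) / len(vals)
            var = sum((v - mean) ** 2 for v in vals) / len(vals)
            if var > thresh:  # busy block: keep as foreground
                for y in range(by, min(by + block, h)):
                    for x in range(bx, min(bx + block, w)):
                        mask[y][x] = True
    return mask
```

A uniform region (e.g. all pixels 200) produces an all-background mask, while a high-contrast region such as printed text survives into the foreground; the later non-text filtering and skew correction stages then work only on the surviving components.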
Listening – Sweet Heart Sweet Light
This week I am listening to “Sweet Heart Sweet Light” by Spiritualized
Paper – Local Space-Time Smoothing for Version Controlled Documents
Today I read a paper titled “Local Space-Time Smoothing for Version Controlled Documents”
The abstract is:
Unlike static documents, version controlled documents are continuously edited by one or more authors.
Such a collaborative revision process makes traditional modeling and visualization techniques inappropriate.
In this paper we propose a new representation based on local space-time smoothing that captures important revision patterns.
We demonstrate the applicability of our framework using experiments on synthetic and real-world data.
Paper – Improved visualisation of brain arteriovenous malformations using color intensity projections with hue cycling
Today I read a paper titled “Improved visualisation of brain arteriovenous malformations using color intensity projections with hue cycling”
The abstract is:
Color intensity projections (CIPs) have been shown to improve the visualisation of greyscale angiography images by combining greyscale images into a single color image.
A key property of the combined CIPs image is the encoding of the arrival time information from greyscale images into the hue of the color in the CIPs image.
A few minor improvements to the calculation of the CIPs image are introduced that substantially improve the quality of the visualisation.
One improvement is interpolating of the greyscale images in time before calculation of the CIPs image.
A second is the use of hue cycling – where the hue of the color is cycled through more than once in an image.
The hue cycling allows the variation of the hue to be concentrated in structures of interest.
An angiogram of a brain is used to demonstrate the substantial improvements hue cycling brings to CIPs images.
A third improvement is the use of maximum intensity projection for 2D rendering of a 3D CIPs image volume.
A fourth improvement allows interpreters to interactively adjust the phase of the hue via standard contrast-brightness controls using lookup tables.
Other potential applications of CIPs are also mentioned.
No longer free
You can always tell when a software-as-a-service company is in trouble by how many previously free features they take away and turn into paid features.
Read – A Canticle for Leibowitz
Today I finished reading “A Canticle for Leibowitz” by Walter M. Miller Jr.
Paper – Side-channel attack on labeling CAPTCHAs
Today I read a paper titled “Side-channel attack on labeling CAPTCHAs”
The abstract is:
We propose a new scheme of attack on Microsoft’s ASIRRA CAPTCHA which represents a significant shortcut to the intended attacking path, as it is not based on any advance in the state of the art in the field of image recognition.
After studying the ASIRRA Public Corpus, we conclude that the security margin as stated by their authors seems to be quite optimistic.
Then, we analyze which of the studied parameters for the image files seems to disclose the most valuable information for helping in correct classification, arriving at a surprising discovery.
This represents a completely new approach to breaking CAPTCHAs that can be applied to many of the currently proposed image-labeling algorithms, and to prove this point we show how to use the very same approach against the HumanAuth CAPTCHA.
Lastly, we investigate some measures that could be used to secure the ASIRRA and HumanAuth schemes, but conclude no easy solutions are at hand.
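The core idea, classifying images from file-level parameters without doing any image recognition, amounts to fitting a simple decision rule on a side-channel feature such as file size. A minimal sketch with invented numbers (the paper does not publish this exact procedure):

```python
def best_threshold(sizes_a, sizes_b):
    """Find the scalar threshold that best separates two classes of
    file sizes: a crude side channel that never looks at the pixels."""
    best_acc, best_t = 0.0, None
    total = len(sizes_a) + len(sizes_b)
    for t in sorted(sizes_a + sizes_b):
        # Accuracy of the rule "class A iff size <= t", or its inverse,
        # whichever direction separates better.
        acc = (sum(s <= t for s in sizes_a) +
               sum(s > t for s in sizes_b)) / total
        acc = max(acc, 1 - acc)
        if acc > best_acc:
            best_acc, best_t = acc, t
    return best_acc, best_t

# Invented file sizes for two image classes (bytes, purely illustrative).
cats = [10432, 11210, 10987]
dogs = [15890, 16240, 15011]
print(best_threshold(cats, dogs))  # (1.0, 11210)
```

If the two classes differ systematically in compressed size, even this one-feature rule beats the image-recognition bound the CAPTCHA's security argument assumed.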
Read – Benito Cereno
Today I finished reading “Benito Cereno” by Herman Melville
Studying – Introduction to InDesign
This month I am studying “Introduction to InDesign”
InDesign is one of those packages I hardly ever use, so I tend to fumble around more than be productive. I am hoping this course will correct that.
Update: I would say I have a working familiarity with InDesign now. It is not a particularly complex package to figure out. I doubt I will have much use for the knowledge going forward unless I plan on creating my own magazine.
Log update: 27 hours of study and practice.
Listening – Halcyon
This week I am listening to “Halcyon” by Ellie Goulding
Paper – Text/Graphics Separation for Business Card Images for Mobile Devices
Today I read a paper titled “Text/Graphics Separation for Business Card Images for Mobile Devices”
The abstract is:
Separation of the text regions from background texture and graphics is an important step of any optical character recognition system for the images containing both texts and graphics.
In this paper, we have presented a novel text/graphics separation technique for business card images captured with a cell-phone camera.
At first, the background is eliminated at a coarse level based on intensity variance.
This makes the foreground components distinct from each other.
Then the non-text components are removed using various characteristic features of text and graphics.
Finally, the text regions are skew corrected and binarized for further processing.
Experimenting with business card images of various resolutions, we have found an optimum performance of 98.54% with 0.75 MP images, which takes 0.17 seconds processing time and 1.1 MB peak memory on a moderately powerful computer (DualCore 1.73 GHz Processor, 1 GB RAM, 1 MB L2 Cache).
The developed technique is computationally efficient and consumes low memory so as to be applicable on mobile devices.
Paper – Extended Range Telepresence for Evacuation Training in Pedestrian Simulations
Today I read a paper titled “Extended Range Telepresence for Evacuation Training in Pedestrian Simulations”
The abstract is:
In this contribution, we propose a new framework to evaluate pedestrian simulations by using Extended Range Telepresence.
Telepresence is used as a virtual reality walking simulator, which provides the user with a realistic impression of being present and walking in a virtual environment that is much larger than the real physical environment, in which the user actually walks.
The validation of the simulation is performed by comparing motion data of the telepresent user with simulated data at some points of the simulation.
The use of haptic feedback from the simulation makes the framework suitable for training in emergency situations.
Paper – Artist Agent: A Reinforcement Learning Approach to Automatic Stroke Generation in Oriental Ink Painting
Today I read a paper titled “Artist Agent: A Reinforcement Learning Approach to Automatic Stroke Generation in Oriental Ink Painting”
The abstract is:
Oriental ink painting, called Sumi-e, is one of the most appealing painting styles that has attracted artists around the world.
Major challenges in computer-based Sumi-e simulation are to abstract complex scene information and draw smooth and natural brush strokes.
To automatically find such strokes, we propose to model the brush as a reinforcement learning agent, and learn desired brush-trajectories by maximizing the sum of rewards in the policy search framework.
We also provide elaborate design of actions, states, and rewards tailored for a Sumi-e agent.
The effectiveness of our proposed approach is demonstrated through simulated Sumi-e experiments.
Paper – Artificial Skin Ridges Enhance Local Tactile Shape Discrimination
Today I read a paper titled “Artificial Skin Ridges Enhance Local Tactile Shape Discrimination”
The abstract is:
One of the fundamental requirements for an artificial hand to successfully grasp and manipulate an object is to be able to distinguish different objects’ shapes and, more specifically, the objects’ surface curvatures.
In this study, we investigate the possibility of enhancing the curvature detection of embedded tactile sensors by proposing a ridged fingertip structure, simulating human fingerprints.
In addition, a curvature detection approach based on machine learning methods is proposed to provide the embedded sensors with the ability to discriminate the surface curvature of different objects.
For this purpose, a set of experiments were carried out to collect tactile signals from a 2×2 tactile sensor array, then the signals were processed and used for learning algorithms.
To achieve the best possible performance for our machine learning approach, three different learning algorithms, Naïve Bayes (NB), Artificial Neural Networks (ANN), and Support Vector Machines (SVM), were implemented and compared for various parameters.
Finally, the most accurate method was selected to evaluate the proposed skin structure in recognition of three different curvatures.
The results showed an accuracy rate of 97.5% in surface curvature discrimination.
Paper – Trends and Techniques in Visual Gaze Analysis
Today I read a paper titled “Trends and Techniques in Visual Gaze Analysis”
The abstract is:
Visualizing gaze data is an effective way for the quick interpretation of eye tracking results.
This paper presents a study investigating the benefits and limitations of visual gaze analysis among eye tracking professionals and researchers.
The results were used to create a tool for visual gaze analysis within a Master’s project.
Paper – Social Norms for Online Communities
Today I read a paper titled “Social Norms for Online Communities”
The abstract is:
Sustaining cooperation among self-interested agents is critical for the proliferation of emerging online social communities, such as online communities formed through social networking services.
Providing incentives for cooperation in social communities is particularly challenging because of their unique features: a large population of anonymous agents interacting infrequently, having asymmetric interests, and dynamically joining and leaving the community; operation errors; and low-cost reputation whitewashing.
In this paper, taking these features into consideration, we propose a framework for the design and analysis of a class of incentive schemes based on a social norm, which consists of a reputation scheme and a social strategy.
We first define the concept of a sustainable social norm under which every agent has an incentive to follow the social strategy given the reputation scheme.
We then formulate the problem of designing an optimal social norm, which selects a social norm that maximizes overall social welfare among all sustainable social norms.
Using the proposed framework, we study the structure of optimal social norms and the impacts of punishment lengths and whitewashing on optimal social norms.
Our results show that optimal social norms are capable of sustaining cooperation, with the amount of cooperation varying depending on the community characteristics.
Listening – The 2nd Law
This week I am listening to “The 2nd Law” by Muse
Paper – Distributed Self-Organization Of Swarms To Find Globally ε-Optimal Routes To Locally Sensed Targets
Today I read a paper titled “Distributed Self-Organization Of Swarms To Find Globally ε-Optimal Routes To Locally Sensed Targets”
The abstract is:
The problem of near-optimal distributed path planning to locally sensed targets is investigated in the context of large swarms.
The proposed algorithm uses only information that can be locally queried; rigorous theoretical results on convergence, robustness, and scalability are established, and the effect of system parameters such as the agent-level communication radius and agent velocities on global performance is analyzed.
The fundamental philosophy of the proposed approach is to percolate local information across the swarm, enabling agents to indirectly access the global context.
A gradient emerges, reflecting the performance of agents, computed in a distributed manner via local information exchange between neighboring agents.
It is shown that to follow near-optimal routes to a target which can be only sensed locally, and whose location is not known a priori, the agents need to simply move towards its “best” neighbor, where the notion of “best” is obtained by computing the state-specific language measure of an underlying probabilistic finite state automata.
The theoretical results are validated in high-fidelity simulation experiments with in excess of 10^4 agents.
Read – Agatha Heterodyne and the Hammerless Bell
Today I finished reading “Agatha Heterodyne and the Hammerless Bell” by Phil Foglio
Listening – Synthetica
This week I am listening to “Synthetica” by Metric
Watching – Religulous
Today I watched “Religulous”
Paper – Effects of Initial Stance of Quadruped Trotting on Walking Stability
Today I read a paper titled “Effects of Initial Stance of Quadruped Trotting on Walking Stability”
The abstract is:
It is very important for a quadruped walking machine to keep its stability in high-speed walking.
It has been indicated that the moment around the supporting diagonal line of a quadruped in the trotting gait largely influences walking stability.
In this paper, the moment around the supporting diagonal line of a quadruped in the trotting gait is modeled and its effects on body attitude are analyzed.
The degree of influence varies with different initial stances, and we obtain the optimal initial stance of a quadruped in the trotting gait with maximal walking stability.
Simulation results are presented.
Paper – A Proposal for Proquints: Identifiers that are Readable, Spellable, and Pronounceable
Today I read a paper titled “A Proposal for Proquints: Identifiers that are Readable, Spellable, and Pronounceable”
The abstract is:
Identifiers (IDs) are pervasive throughout our modern life.
We suggest that these IDs would be easier to manage and remember if they were easily readable, spellable, and pronounceable.
As a solution to this problem we propose using PRO-nouncable QUINT-uplets of alternating unambiguous consonants and vowels: _proquints_.
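The encoding itself is simple enough to sketch: each 16-bit word becomes five letters in a consonant-vowel-consonant-vowel-consonant pattern, drawing 4 bits per consonant and 2 bits per vowel. This follows my reading of the proposal; the 127.0.0.1 example is the one the paper itself uses:

```python
CONSONANTS = "bdfghjklmnprstvz"  # 16 consonants, 4 bits each
VOWELS = "aiou"                  # 4 vowels, 2 bits each

def uint16_to_proquint(n):
    """Encode a 16-bit integer as a five-letter proquint
    (consonant-vowel-consonant-vowel-consonant, 4+2+4+2+4 = 16 bits)."""
    assert 0 <= n < 1 << 16
    return (CONSONANTS[(n >> 12) & 0xF] +
            VOWELS[(n >> 10) & 0x3] +
            CONSONANTS[(n >> 6) & 0xF] +
            VOWELS[(n >> 4) & 0x3] +
            CONSONANTS[n & 0xF])

# The proposal's example: the IP address 127.0.0.1, split into the
# 16-bit halves 0x7F00 and 0x0001, encodes as "lusab-babad".
print("-".join(uint16_to_proquint(h) for h in (0x7F00, 0x0001)))  # lusab-babad
```

Because consonants and vowels strictly alternate and both alphabets avoid ambiguous letters, the result is easy to read aloud over a phone, which is the whole point.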
Watching – Allan Quatermain and the Temple of Skulls
Today I watched “Allan Quatermain and the Temple of Skulls”
Read – The Feynman Lectures on Physics Vol 16
Today I finished reading “The Feynman Lectures on Physics Vol 16” by Richard Feynman
Listening – Wixiw
This week I am listening to “Wixiw” by Liars
Read – Earn What You’re Really Worth
Today I finished reading “Earn What You’re Really Worth: Maximize Your Income at Any Time in Any Market” by Brian Tracy
Paper – Inaccessibility-Inside Theorem for Point in Polygon
Today I read a paper titled “Inaccessibility-Inside Theorem for Point in Polygon”
The abstract is:
The manuscript presents a theoretical proof, together with new definitions of Inaccessibility and Inside, for a point S related to a simple or self-intersecting polygon P.
The proposed analytical solution depicts a novel way of solving the point-in-polygon problem by explicitly employing the properties of epigraphs and hypographs.
Contrary to the ambiguous results given by the crossing test for simple and self-intersecting polygons, and the result of a point being multiply inside a self-intersecting polygon given by the winding number rule, the current solution gives an unambiguous and singular result for both kinds of polygons.
Finally, the current theoretical solution proves to be mathematically correct for simple and self-intersecting polygons.
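For contrast with the paper's epigraph/hypograph approach, here is the classic even-odd crossing test that the abstract argues gives ambiguous answers for self-intersecting polygons (a minimal sketch, ignoring points exactly on an edge):

```python
def point_in_polygon(pt, poly):
    """Even-odd (crossing) test: cast a horizontal ray to the right from
    pt and count how many polygon edges it crosses; an odd count means
    the point is inside."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # the edge straddles the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:       # crossing lies to the right of pt
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon((2, 2), square))  # True
print(point_in_polygon((5, 2), square))  # False
```

For a self-intersecting polygon this rule and the winding-number rule can disagree about the overlap region, which is exactly the ambiguity the paper sets out to remove.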
Paper – A network model with structured nodes
Today I read a paper titled “A network model with structured nodes”
The abstract is:
We present a network model in which words over a specific alphabet, called “structures”, are associated to each node and undirected edges are added depending on some distance between different structures.
It is shown that this model can generate, without the use of preferential attachment or any other heuristic, networks with topological features similar to biological networks: power law degree distribution, clustering coefficient independent from the network size, etc.
Specific biological networks (the C. elegans neural network and the E. coli protein-protein interaction network) are replicated using this model.
Read – Introduction to the Theory of Computation
Today I finished reading “Introduction to the Theory of Computation” by Michael Sipser
Read – The Rift
Today I finished reading “The Rift” by Walter Jon Williams
Listening – Nocturne
This week I am listening to “Nocturne” by Wild Nothing
Read – Polly and the Pirates #2: Mystery of the Dragonfish
Today I finished reading “Polly and the Pirates #2: Mystery of the Dragonfish” by Ted Naifeh
Paper – Automatic Recommendation for Online Users Using Web Usage Mining
Today I read a paper titled “Automatic Recommendation for Online Users Using Web Usage Mining”
The abstract is:
A challenging real-world task for the web master of an organization is to match the needs of users and keep their attention on the web site.
So, the only option is to capture the intuition of the users and provide them with a recommendation list.
More specifically, online navigation behavior grows with each passing day, so extracting information intelligently from it is a difficult issue.
The web master should use web usage mining (WUM) methods to capture this intuition.
A WUM system is designed to operate on web server logs which contain users’ navigation history.
Hence, a recommendation system using WUM can be used to forecast the navigation pattern of a user and present it to the user in the form of a recommendation list.
In this paper, we propose a two-tier architecture for capturing users’ intuition in the form of a recommendation list containing pages visited by the user and pages visited by other users having similar usage profiles.
The practical implementation of the proposed architecture and algorithm shows that the accuracy of user intuition capturing is improved.
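The abstract does not spell out its algorithm, but the simplest form of log-based recommendation is a co-occurrence count over user sessions: pages often visited together get recommended together. A minimal sketch with invented session data:

```python
from collections import Counter, defaultdict

def build_cooccurrence(sessions):
    """Count how often each pair of pages appears in the same session."""
    co = defaultdict(Counter)
    for session in sessions:
        pages = set(session)
        for p in pages:
            for q in pages:
                if p != q:
                    co[p][q] += 1
    return co

def recommend(co, current_page, k=3):
    """Recommend the k pages most often co-visited with current_page."""
    return [page for page, _ in co[current_page].most_common(k)]

# Invented server-log sessions, one list of visited pages per user visit.
sessions = [
    ["home", "laptops", "reviews"],
    ["home", "laptops", "checkout"],
    ["home", "phones"],
]
co = build_cooccurrence(sessions)
print(recommend(co, "laptops", k=1))  # ['home']
```

The paper's second tier, pages visited by other users with similar profiles, would sit on top of something like this, matching whole usage profiles rather than single pages.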
Read – Polly and the Pirates #1
Today I finished reading “Polly and the Pirates #1” by Ted Naifeh
Paper – PageRank Optimization by Edge Selection
Today I read a paper titled “PageRank Optimization by Edge Selection”
The abstract is:
The importance of a node in a directed graph can be measured by its PageRank.
The PageRank of a node is used in a number of application contexts – including ranking websites – and can be interpreted as the average portion of time spent at the node by an infinite random walk.
We consider the problem of maximizing the PageRank of a node by selecting some of the edges from a set of edges that are under our control.
By applying results from Markov decision theory, we show that an optimal solution to this problem can be found in polynomial time.
Our core solution results in a linear programming formulation, but we also provide an alternative greedy algorithm, a variant of policy iteration, which runs in polynomial time, as well.
Finally, we show that, under the slight modification for which we are given mutually exclusive pairs of edges, the problem of PageRank optimization becomes NP-hard.
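The paper's edge-selection optimization is not reproduced here, but the quantity being maximized, PageRank as "average portion of time spent at the node" by a random walk, can be computed with plain power iteration (a minimal sketch; the tiny graph is invented):

```python
def pagerank(links, d=0.85, iters=100):
    """Power-iteration PageRank on a directed graph.

    links maps node -> list of out-neighbors.
    Returns a dict of ranks summing to 1."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1 - d) / n for u in nodes}
        for u in nodes:
            out = links[u]
            if out:
                share = d * rank[u] / len(out)
                for v in out:
                    new[v] += share
            else:  # dangling node: spread its rank uniformly
                for v in nodes:
                    new[v] += d * rank[u] / n
        rank = new
    return rank

# Tiny hypothetical web graph: "a" is linked from both "b" and "c".
links = {"a": ["b"], "b": ["a", "c"], "c": ["a"]}
ranks = pagerank(links)
print(max(ranks, key=ranks.get))  # a
```

The optimization problem is then: given a set of optional edges you control, choose the subset that maximizes one target node's value in this fixed point, which the paper solves in polynomial time via Markov decision theory.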
Watching – Iron Sky
Today I watched “Iron Sky”
Read – Yotsuba&! #03
Today I finished reading “Yotsuba&! #03” by Kiyohiko Azuma
Paper – Moveable objects and applications, based on them
Today I read a paper titled “Moveable objects and applications, based on them”
The abstract is:
The inner views of all our applications are predetermined by their designers; only some non-significant variations are allowed with the help of an adaptive interface.
In several programs you can find some moveable objects, but it is an extremely rare thing.
However, the design of applications on the basis of moveable and resizable objects opens an absolutely new way of programming; such applications are much more effective in users’ work, because each user can adjust an application to his purposes.
Programs using an adaptive interface only implement the designer’s ideas of what would be the best reaction to any of the users’ actions or commands.
Applications built on moveable elements do not have such a predetermined system of rules; they are fully controlled by the users.
This article describes and demonstrates the new way of designing applications.
Studying – Pixel art for video games
This month I am studying “Pixel art for video games”
Listening – Attack On Memory
This week I am listening to “Attack On Memory” by Cloud Nothings
Paper – Urologic robots and future directions
Today I read a paper titled “Urologic robots and future directions”
The abstract is:
PURPOSE OF REVIEW: Robot-assisted laparoscopic surgery in urology has gained immense popularity with the daVinci system, but many research teams are working on new robots.
The purpose of this study is to review current urologic robots and present future development directions.
RECENT FINDINGS: Future systems are expected to advance in two directions: improvements of remote manipulation robots and developments of image-guided robots.
SUMMARY: The final goal of robots is to allow safer and more homogeneous outcomes with less variability of surgeon performance, as well as new tools to perform tasks on the basis of medical transcutaneous imaging, in a less invasive way, at lower costs.
It is expected that improvements for remote systems will include augmented reality, haptic feedback, size reduction, and the development of new tools for natural orifice translumenal endoscopic surgery.
The paradigm of image-guided robots is close to clinical availability and the most advanced robots are presented with end-user technical assessments.
It is also notable that the potential of robots lies much further ahead than the accomplishments of the daVinci system.
The integration of imaging with robotics holds a substantial promise, because this can accomplish tasks otherwise impossible.
Image-guided robots have the potential to offer a paradigm shift.
Paper – Deployment of mobile routers ensuring coverage and connectivity
Today I read a paper titled “Deployment of mobile routers ensuring coverage and connectivity”
The abstract is:
Maintaining connectivity among a group of autonomous agents exploring an area is very important, as it promotes cooperation between the agents and also helps message exchanges which are very critical for their mission.
Creating an underlying Ad-hoc Mobile Router Network (AMRoNet) using simple robotic routers is an approach that facilitates communication between the agents without restricting their movements.
We address the following question in our paper: how can we create an AMRoNet with local information and with a minimum number of routers? We propose two new localized and distributed algorithms for creating an AMRoNet: 1) agent-assisted router deployment and 2) self-spreading.
The algorithms use a greedy deployment strategy for deploying routers effectively into the area, maximizing coverage, and a triangular deployment strategy to connect different connected components of routers from different base stations.
Empirical analysis shows that the proposed algorithms are the two best localized approaches to create AMRoNets.
Read – Maximum Ride #6
Today I finished reading “Maximum Ride #6” by James Patterson
Paper – What Stops Social Epidemics?
Today I read a paper titled “What Stops Social Epidemics?”
The abstract is:
Theoretical progress in understanding the dynamics of spreading processes on graphs suggests the existence of an epidemic threshold below which no epidemics form and above which epidemics spread to a significant fraction of the graph.
We have observed information cascades on the social media site Digg that spread fast enough for one initial spreader to infect hundreds of people, yet end up affecting only 0.1% of the entire network.
We find that two effects, previously studied in isolation, combine cooperatively to drastically limit the final size of cascades on Digg.
First, because of the highly clustered structure of the Digg network, most people who are aware of a story have been exposed to it via multiple friends.
This structure lowers the epidemic threshold while moderately slowing the overall growth of cascades.
In addition, we find that the mechanism for social contagion on Digg points to a fundamental difference between information spread and other contagion processes: despite multiple opportunities for infection within a social group, people are less likely to become spreaders of information with repeated exposure.
The consequences of this mechanism become more pronounced for more clustered graphs.
Ultimately, this effect severely curtails the size of social epidemics on Digg.
Paper – A distributed Approach for Access and Visibility Task with a Manikin and a Robot in a Virtual Reality Environment
Today I read a paper titled “A distributed Approach for Access and Visibility Task with a Manikin and a Robot in a Virtual Reality Environment”
The abstract is:
This paper presents a new method, based on a multi-agent system and on digital mock-up technology, to assess an efficient path planner for a manikin or a robot for access and visibility tasks, taking into account ergonomic constraints or joint and mechanical limits.
In order to solve this problem, the human operator is integrated in the process optimization to contribute to a global perception of the environment.
This operator cooperates, in real-time, with several automatic local elementary agents.
The result of this work validates solutions through the digital mock-up; it can be applied to simulate maintainability and mountability tasks.
Watching – Little Big Soldier
Today I watched “Little Big Soldier”