This week I am listening to “Champ” by Tokyo Police Club
Read – Ruby on Rails 3 Tutorial: Learn Rails by Example
Today I finished reading “Ruby on Rails 3 Tutorial: Learn Rails by Example” by Michael Hartl
Paper – A Wavelet-Based Digital Watermarking for Video
Today I read a paper titled “A Wavelet-Based Digital Watermarking for Video”
The abstract is:
A novel video watermarking system operating in the three-dimensional wavelet transform domain is presented here.
Specifically, the video sequence is partitioned into spatio-temporal units and the single shots are projected onto the 3D wavelet domain.
First, a grayscale watermark image is decomposed into a series of bitplanes that are preprocessed with a random location matrix.
The preprocessed bitplanes are then adaptively spread-spectrum modulated and added to the 3D wavelet coefficients of the video shot.
Our video watermarking algorithm is robust against the attacks of frame dropping, averaging, and swapping.
Furthermore, it allows blind retrieval of the embedded watermark, which does not require the original video, and the watermark is perceptually invisible.
The algorithm design, evaluation, and experimentation of the proposed scheme are described in this paper.
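The embedding idea can be illustrated with a generic additive spread-spectrum sketch. This is not the paper's exact scheme: the bitplane decomposition, random location matrix, and 3D wavelet transform are all omitted, and the strength alpha is an assumed value.

```python
import numpy as np

rng = np.random.default_rng(42)

def embed_bit(coeffs, bit, alpha=0.1):
    """Additively embed one watermark bit into a coefficient array using a
    shared pseudo-random carrier (generic spread-spectrum sketch; the paper
    embeds preprocessed bitplanes into 3D wavelet coefficients)."""
    pn = rng.standard_normal(coeffs.shape)  # pseudo-random carrier
    sign = 1.0 if bit else -1.0
    return coeffs + alpha * sign * pn, pn

def detect_bit(coeffs, pn):
    """Blind detection: correlate with the carrier; no original needed."""
    return 1 if float(np.vdot(coeffs, pn)) > 0 else 0
```

The blind-retrieval property in the abstract corresponds to `detect_bit` needing only the carrier sequence, not the unmarked coefficients.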
Paper – Multi-camera Realtime 3D Tracking of Multiple Flying Animals
Today I read a paper titled “Multi-camera Realtime 3D Tracking of Multiple Flying Animals”
The abstract is:
Automated tracking of animal movement allows analyses that would not otherwise be possible by providing great quantities of data.
The additional capability of tracking in realtime – with minimal latency – opens up the experimental possibility of manipulating sensory feedback, thus allowing detailed explorations of the neural basis for control of behavior.
Here we describe a new system capable of tracking the position and body orientation of animals such as flies and birds.
The system operates with less than 40 msec latency and can track multiple animals simultaneously.
To achieve these results, a multi-target tracking algorithm was developed based on the Extended Kalman Filter and the Nearest Neighbor Standard Filter data association algorithm.
In one implementation, an eleven camera system is capable of tracking three flies simultaneously at 60 frames per second using a gigabit network of nine standard Intel Pentium 4 and Core 2 Duo computers.
This manuscript presents the rationale and details of the algorithms employed and shows three implementations of the system.
An experiment was performed using the tracking system to measure the effect of visual contrast on the flight speed of Drosophila melanogaster.
At low contrasts, speed is more variable and faster on average than at high contrasts.
Thus, the system is already a useful tool to study the neurobiology and behavior of freely flying animals.
If combined with other techniques, such as ‘virtual reality’-type computer graphics or genetic manipulation, the tracking system would offer a powerful new way to investigate the biology of flying animals.
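The core of such a tracker is compact enough to sketch: one constant-velocity Kalman filter per target plus greedy nearest-neighbor data association. This is a simplified illustration, not the paper's implementation; dt and the noise covariances are assumed values, and the Extended Kalman Filter's linearized camera model is omitted, leaving a plain linear Kalman filter.

```python
import numpy as np

DT = 1.0 / 60.0  # 60 frames per second, as in the paper's fly-tracking setup

# State [x, y, vx, vy], observation [x, y]; constant-velocity model
F = np.array([[1, 0, DT, 0],
              [0, 1, 0, DT],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)
Q = 1e-4 * np.eye(4)  # process noise (assumed)
R = 1e-3 * np.eye(2)  # measurement noise (assumed)

def kf_predict(x, P):
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - H @ x), (np.eye(4) - K @ H) @ P

def associate(states, detections):
    """Greedy nearest-neighbor data association: each track claims the
    closest detection not yet taken (a simplification of the NNSF)."""
    taken, pairs = set(), {}
    for i, x in enumerate(states):
        candidates = [(np.linalg.norm(z - H @ x), j)
                      for j, z in enumerate(detections) if j not in taken]
        if candidates:
            _, j = min(candidates)
            taken.add(j)
            pairs[i] = j
    return pairs
```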
Paper – Code Similarity on High Level Programs
Today I read a paper titled “Code Similarity on High Level Programs”
The abstract is:
This paper presents a new approach for code similarity on High Level programs.
Our technique is based on Fast Dynamic Time Warping, which builds a warp path, or point-to-point relation, with local restrictions.
The source code is represented as a time series using the operators of the programming language, which makes the comparison possible.
This enables the detection of subsequences that represent similar code instructions.
In contrast with other code similarity algorithms, we do not perform feature extraction.
The experiments show that two source codes are similar when their respective time series are similar.
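The distance at the heart of this approach can be sketched with classic dynamic time warping over operator sequences. This is a minimal illustration, not the authors' FastDTW with its local restrictions, and the operator-to-number mapping below is an assumption.

```python
def dtw_distance(a, b):
    """Classic O(n*m) dynamic time warping distance between two series."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# Illustrative mapping of operators to numbers, so source code becomes a
# comparable time series (the actual encoding is the paper's own)
OP_CODES = {"=": 1, "+": 2, "-": 3, "*": 4, "/": 5}

def to_series(source):
    return [OP_CODES[ch] for ch in source if ch in OP_CODES]
```

Two fragments that use the same operators in the same order get distance zero even when every identifier differs, which is the intuition behind comparing code as time series.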
Read – Project Management Secrets
Today I finished reading “Project Management Secrets” by Matthew Batchelor
Listening – Hadestown
This week I am listening to “Hadestown” by Anaïs Mitchell
Paper – A New Clustering Algorithm Based Upon Flocking On Complex Network
Today I read a paper titled “A New Clustering Algorithm Based Upon Flocking On Complex Network”
The abstract is:
We have proposed a model based upon flocking on a complex network, and then developed two clustering algorithms on the basis of it.
In the algorithms, a k-nearest neighbor (knn) graph is first produced as a weighted and directed graph among all data points in a dataset, each of which is regarded as an agent that can move in space; a time-varying complex network is then created by adding long-range links for each data point.
Furthermore, each data point is acted upon not only by its k nearest neighbors but also by r long-range neighbors, through fields they jointly establish in space, so it takes a step along the direction of the vector sum of all fields.
More importantly, these long-range links provide some hidden information for each data point as it moves, and at the same time accelerate its convergence to a center.
As they move in space according to the proposed model, data points that belong to the same class gradually converge to the same position, whereas those that belong to different classes move away from one another.
Consequently, the experimental results have demonstrated that data points in datasets are clustered reasonably and efficiently, and the rates of convergence of clustering algorithms are fast enough.
Moreover, the comparison with other algorithms also provides an indication of the effectiveness of the proposed approach.
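A much-reduced sketch of the movement rule is below: only the k-nearest-neighbor attraction is kept, while the r long-range links and the field formulation are omitted, and k and the step size are assumed values.

```python
import numpy as np

def flock_step(X, k=3, step=0.2):
    """Move each point a small step toward the centroid of its k nearest
    neighbors; iterating contracts points of the same cluster together."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)        # exclude self-distance
    nn = np.argsort(d, axis=1)[:, :k]  # indices of k nearest neighbors
    return X + step * (X[nn].mean(axis=1) - X)
```

Iterating this step until points coalesce lets clusters be read off as groups of coincident positions, which mirrors the convergence behavior the abstract describes.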
Paper – On the Efficiency of Strategies for Subdividing Polynomial Triangular Surface Patches
Today I read a paper titled “On the Efficiency of Strategies for Subdividing Polynomial Triangular Surface Patches”
The abstract is:
In this paper, we investigate the efficiency of various strategies for subdividing polynomial triangular surface patches.
We give a simple algorithm performing a regular subdivision in four calls to the standard de Casteljau algorithm (in its subdivision version).
A naive version uses twelve calls.
We also show that any method for obtaining a regular subdivision using the standard de Casteljau algorithm requires at least four calls.
Thus, our method is optimal.
We give another subdivision algorithm using only three calls to the de Casteljau algorithm.
Instead of being regular, the subdivision pattern is diamond-like.
Finally, we present a “spider-like” subdivision scheme producing six subtriangles in four calls to the de Casteljau algorithm.
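The primitive being counted here, one call to the de Casteljau algorithm in its subdivision version, is easy to state for the curve case. The paper treats triangular patches; this one-dimensional sketch is for illustration only.

```python
def de_casteljau_split(control, t=0.5):
    """Split a Bezier curve at parameter t, returning the control
    polygons of the left and right halves."""
    left, right = [], []
    pts = [tuple(p) for p in control]
    while pts:
        left.append(pts[0])     # first point of each row -> left polygon
        right.append(pts[-1])   # last point of each row -> right polygon
        pts = [tuple((1 - t) * u + t * v for u, v in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    right.reverse()
    return left, right
```

Each call produces two sub-curves sharing the split point; counting how many such calls a subdivision strategy needs is exactly the efficiency measure the paper studies.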
Paper – Jamming in complex networks with degree correlation
Today I read a paper titled “Jamming in complex networks with degree correlation”
The abstract is:
We study the effects of degree-degree correlations on the pressure congestion J when we apply a dynamical process on scale-free complex networks using the gradient network approach.
We find that the pressure congestion for disassortative (assortative) networks is lower (higher) than that for uncorrelated networks, which allows us to affirm that disassortative networks enhance transport through them.
This result agrees with the fact that many real-world transportation networks naturally evolve to this kind of correlation.
We explain our results by showing that in the disassortative case the clusters in the gradient network turn out to be as elongated as possible, reducing the pressure congestion J, with the opposite behavior observed in the assortative case.
Finally we apply our model to real-world networks, and the results agree with our theoretical model.
Paper – Predicting relevant empty spots in social interaction
Today I read a paper titled “Predicting relevant empty spots in social interaction”
The abstract is:
An empty spot refers to an empty, hard-to-fill space found in the records of social interaction, and is a clue to the persons in the underlying social network who do not appear in the records.
This contribution addresses the problem of predicting relevant empty spots in social interaction.
Homogeneous and inhomogeneous networks are studied as models underlying the social interaction.
A heuristic predictor function approach is presented as a new method to address the problem.
A simulation experiment is demonstrated on a homogeneous network.
Test data in the form of baskets are generated from the simulated communication.
Precision in predicting the empty spots is calculated to demonstrate the performance of the presented approach.
Paper – Fundamentals of Mathematical Theory of Emotional Robots
Today I read a paper titled “Fundamentals of Mathematical Theory of Emotional Robots”
The abstract is:
In this book we introduce a mathematically formalized concept of emotion, a robot’s education, and other psychological parameters of intelligent robots.
We also introduce unitless coefficients characterizing an emotional memory of a robot.
Besides, the effect of a robot’s memory upon its emotional behavior is studied, and theorems defining fellowship and conflicts in groups of robots are proved.
Also unitless parameters describing emotional states of those groups are introduced, and a rule of making alternative (binary) decisions based on emotional selection is given.
We introduce a concept of equivalent educational process for robots and a concept of efficiency coefficient of an educational process, and suggest an algorithm of emotional contacts within a group of robots.
And generally, we present and describe a model of a virtual reality with emotional robots.
The book is meant for mathematical modeling specialists and emotional robot software developers.
The Greatest Story Never Sold
Pepe looked up from his work, staring thoughtfully at the middle distance.
“I’m telling ya, people don’t write anymore,” he stated forcefully, his old voice unwavering. “Was a time when you could read a story and actually make sense of it all.”
“It had a beginning, a middle, and an end. Most of the time. Now it’s just babble to satisfy the machines.”
He returned to re-arranging bright red, Coca-Cola bottle caps on a small work table in front of him, his old eyes scanning their shiny surfaces for the slightest imperfections, his strangely smooth hands and delicately manicured fingernails expertly flipping over entire rows of caps to ensure no edges had been bent or become misshapen.
“They say everyone has a book in them. Well ain’t that the truth. These days everyone does have a book in them, or three, or four. Time now is you can’t talk to anyone, anywhere without them mentioning their latest fancifully imagined tell-all memoir about a life they never had or their ridiculous fantasy trilogy that mines every popular Westernized myth ever posted on our information super highway.” He snorted derisively at the statement, using the expression to show his distaste for everything that the internet had done to his career.
“Don’t just pour them out, they scratch. How many times do I have to tell you not to do that? Those caps are worth money, worth more than my words. People will pay good money for pristine vintage caps.”
“I only ever had one book in me. Never got to find out if I had more. Oh, I wrote, lots of paid for words, never stopped writing right up until the end. It was the end when it all went wrong. People stopped writing to be read.” Pepe sighed at the memory, pulled out several damaged bottle caps to one side, sliding them silently across the green baize of the work table and continued to sort.
“Did you ever read any of the greats?” He didn’t wait for an answer. “Niven, Scalzi, Bear. They were some of my favourites when I was young. Younger, anyway. You can still find them if you look around, though nobody carries them anymore. They just disappeared when people started writing for themselves.”
“If you can ever find one of theirs, even if it’s illegal, you grab it and you read it.”
“My book? My book had action in it, it was a thriller, very sophisticated, you never knew what was going to happen until the very end. Very important that, in a thriller, keep your readers guessing.”
“Why? Because…. Because! That’s why! You don’t want to reveal what’s going on in your story until it’s necessary to do so, keeps the reader’s interest. Makes them want to read some more. Makes it a real page turner.”
“No, I guess you wouldn’t know where that expression came from. Page forwarder then.” Pepe sighed exasperatedly at his companion. “Fine, make you want to download the next book.”
“We had editors and proof readers that would make sure the crap stayed out. Most of the time they made sure the crap stayed out anyway.”
“Look, the moment that people could self-publish with ease it began to kill off those kinds of jobs. ‘We don’t need no stinking editors acting as gate-keepers’ they would say. Well, yes, yes you do.”
“Pretty soon we were all acting as editors and proof-readers and who has the time for that? The slush pile growing ever higher. We needed a service where someone could read through the crap, decide what was worth paying attention to, and then tell us what was worth reading.”
More bottle caps. More sorting. Moving to the flat above the pub had its benefits.
“The service? We already had that, it was called the publishing industry. But new writers, always eager to get published, didn’t want those gate keepers. Kept them out of being published is what they would tell you.”
Pepe sat back, wet his lips, the recognizable sign that a long monologue was coming.
“Of course it did, who would want to read their crap! Pretty soon some clever people wrote algorithms that would analyze what had been written, proof read it, edit it, automatically correct it, and then let you publish it to be read. Didn’t matter that you couldn’t write worth shit. The software could fix up your poorly chosen phrases and incorrect word usages. But then people started getting lazy. Lazier than normal I mean. They started writing their words to fit what the machines wanted to see. Writers began gaming the system, making their work match what the machine expected to see, making their work score the highest possible score the machine could give. If your book got a high score by the machine, in the early days, it became an almost instant best seller, even without marketing and promotion. People paid attention to the machines. The brilliant software algorithm was being gamed by the not so brilliant but far more cunning writers. The people in power, the editors, updated the algorithms to prevent the gaming from taking place. The writers adapted to the new algorithms. It didn’t take long. Oh, it didn’t happen literally overnight, six months, maybe a year, and now those same bad authors pushing out rubbish were scoring high marks by the algorithms again. Then came the smart adaptive algorithms that could evolve their style to the whims of the market, to pick and choose what was wanted based on a pool of readers and what they were buying. And of course, the writers adapted almost as fast as the software, everyone was looking for an edge. It’s a spiral, it’s unsustainable, pretty soon everyone who can write has given up and everyone else who wants to write has started composing, and I use that word loosely, ‘composing’ pure drivel to satisfy some adaptive algorithm. And the readers, hah, you blame the writers, but the readers are buying the books. Well, you cannot really call them readers can you? They download and collect thousands of books, more than you can read in a lifetime, on little devices that can store the Library of Congress ten times over.”
“What? Don’t mumble. I hear fine, but mumbling is as bad as bad writing.”
“That was a measurement we had once. A Library of Congress. It was tens of thousands of books and other written works that mattered, all stored in a single place. Now we tote around more information in our hand than an entire generation of people could read or care to read. The readers aren’t reading. They’re collecting. Collecting utterly pure drivel.”
“Here, let me show you what I was working on for the past year. It took me an entire year to write this book. Yes, yes, I know most people do it in a few days at most. What’s wrong with me? Nothing. It took me over ten months to perfect the algorithm, and then the book was written in just a few seconds. I uploaded it to three of the biggest distribution channels in the world just a few hours ago. See these numbers? This book of mine is outselling the closest competitor by a wide margin, almost five to one in some regions.”
“Let me look at the analytics. Well would you look at that, says not only are people buying it, but real people are reading it too. There’s a chap here in Bangor that has gotten up to page 60 already! Now that’s dedication, 60 pages in just a hair under four hours. That’s unheard of in today’s readership.”
“Looks like the automated foreign translations are doing incredibly well. And the machine generated reviews from the New York Times and the Guardian have given me quite a boost in sales too. What’s the book about? How should I know! I never wrote it. Here, let me see what the reviews say: “One of the most powerful exposés on political intrigue in America in this decade.” I guess it’s a political book. Oh, this automated review gave me really good marks, “9.5 out of ten, if Tolkien and George R.R. Martin had a love child book, this would be it.” Um, yes, well, I guess it is sort of a cross-genre fantasy political book. “The first in what will prove to be a deciding trilogy of some of the greatest work this century.” I guess I need to run my application again as apparently I have written the first part of a trilogy.”
“Package those up. Do it carefully. Turn out the table lights when you’re done. I might be a New York Times bestselling author today, but by tomorrow, those vintage bottle caps are still going to be worth more than my words. And will be around longer too. I need to see what my new book is about…”
First draft in October 1997.
First published in print March 2009.
Copyright 1997 Justin Lloyd
Listening – Passive Me, Aggressive You
This week I am listening to “Passive Me, Aggressive You” by The Naked And Famous
Listening – Opus Eponymous
This week I am listening to “Opus Eponymous” by Ghost
Paper – The “Unfriending” Problem: The Consequences of Homophily in Friendship Retention for Causal Estimates of Social Influence
Today I read a paper titled “The “Unfriending” Problem: The Consequences of Homophily in Friendship Retention for Causal Estimates of Social Influence”
The abstract is:
An increasing number of scholars are using longitudinal social network data to try to obtain estimates of peer or social influence effects.
These data may provide additional statistical leverage, but they can introduce new inferential problems.
In particular, while the confounding effects of homophily in friendship formation are widely appreciated, homophily in friendship retention may also confound causal estimates of social influence in longitudinal network data.
We provide evidence for this claim in a Monte Carlo analysis of the statistical model used by Christakis, Fowler, and their colleagues in numerous articles estimating “contagion” effects in social networks.
Our results indicate that homophily in friendship retention induces significant upward bias and decreased coverage levels in the Christakis and Fowler model if there is non-negligible friendship attrition over time.
Paper – Personal applications, based on moveable / resizable elements
Today I read a paper titled “Personal applications, based on moveable / resizable elements”
The abstract is:
All modern applications have an interface completely defined by the developers.
The use of an adaptive interface or dynamic layout allows some variations, but even these are predetermined at the design stage, because the best reaction (from the designer’s view) to any possible user action is hardcoded.
But there is a different world of applications, constructed entirely from moveable / resizable elements; such applications hand full control to the users.
The crucial thing in such programs is that not just some elements but everything must become moveable and resizable.
This article describes the features of such applications and the algorithm behind their design.
Paper – Mining Meaning from Wikipedia
Today I read a paper titled “Mining Meaning from Wikipedia”
The abstract is:
Wikipedia is a goldmine of information; not just for its many readers, but also for the growing community of researchers who recognize it as a resource of exceptional scale and utility.
It represents a vast investment of manual effort and judgment: a huge, constantly evolving tapestry of concepts and relations that is being applied to a host of tasks.
This article provides a comprehensive description of this work.
It focuses on research that extracts and makes use of the concepts, relations, facts and descriptions found in Wikipedia, and organizes the work into four broad categories: applying Wikipedia to natural language processing; using it to facilitate information retrieval; using it for information extraction; and using it as a resource for ontology building.
The article addresses how Wikipedia is being used as is, how it is being improved and adapted, and how it is being combined with other structures to create entirely new resources.
We identify the research groups and individuals involved, and how their work has developed in the last few years.
We provide a comprehensive list of the open-source software they have produced.
Studying – Creating dada posters
This month I am studying “Creating dada posters”
Listening – Sir Lucious Left Foot: The Son Of Chico Dusty
This week I am listening to “Sir Lucious Left Foot: The Son Of Chico Dusty” by Big Boi
Paper – Neural networks in 3D medical scan visualization
Today I read a paper titled “Neural networks in 3D medical scan visualization”
The abstract is:
For medical volume visualization, one of the most important tasks is to reveal clinically relevant details from the 3D scan (CT, MRI, etc.), e.g. the coronary arteries, without obscuring them with less significant parts.
These volume datasets contain different materials which are difficult to extract and visualize with 1D transfer functions based solely on the attenuation coefficient.
Multi-dimensional transfer functions allow a much more precise classification of data which makes it easier to separate different surfaces from each other.
Unfortunately, setting up multi-dimensional transfer functions can become a fairly complex task, generally accomplished by trial and error.
This paper explains neural networks, and then presents an efficient way to speed up visualization process by semi-automatic transfer function generation.
We describe how to use neural networks to detect distinctive features shown in the 2D histogram of the volume data and how to use this information for data classification.
Read – How Successful People Think
Today I finished reading “How Successful People Think: Change Your Thinking, Change Your Life” by John Maxwell
Paper – Being Rational or Aggressive? A Revisit to Dunbar’s Number in Online Social Networks
Today I read a paper titled “Being Rational or Aggressive? A Revisit to Dunbar’s Number in Online Social Networks”
The abstract is:
Recent years have witnessed the explosion of online social networks (OSNs).
They provide powerful IT innovations for online social activities such as organizing contacts, publishing content, and sharing interests between friends who may never have met before.
As more and more people become the active users of online social networks, one may ponder questions such as: (1) Do OSNs indeed improve our sociability? (2) To what extent can we expand our offline social spectrum in OSNs? (3) Can we identify some interesting user behaviors in OSNs? Our work in this paper just aims to answer these interesting questions.
To this end, we pay a revisit to the well-known Dunbar’s number in online social networks.
Our main research contributions are as follows.
First, to the best of our knowledge, our work is the first to systematically validate the existence of the online Dunbar’s number in the range of [200, 300].
To reach this, we combine local-structure analysis with user-interaction analysis on extensive real-world OSNs.
Second, we divide OSN users into two categories, rational and aggressive, and find that rational users tend to develop close and reciprocated relationships, whereas aggressive users have no consistent behaviors.
Third, we build a simple model to capture the constraints of time and cognition that affect the evolution of online social networks.
Finally, we show the potential use of our findings in viral marketing and privacy management in online social networks.
Read – No Excuses!
Today I finished reading “No Excuses!: The Power of Self-Discipline” by Brian Tracy
Paper – Detecting the Most Unusual Part of a Digital Image
Today I read a paper titled “Detecting the Most Unusual Part of a Digital Image”
The abstract is:
The purpose of this paper is to introduce an algorithm that can detect the most unusual part of a digital image.
The most unusual part of a given shape is defined as a part of the image that has the maximal distance to all non-intersecting shapes with the same form.
The method can be used to scan image databases with no clear model of the interesting part, or to scan large image databases, such as medical databases.
Listening – There Is Love In You
This week I am listening to “There Is Love In You” by Four Tet
Paper – Intrusion Detection Using Cost-Sensitive Classification
Today I read a paper titled “Intrusion Detection Using Cost-Sensitive Classification”
The abstract is:
Intrusion Detection is an invaluable part of computer networks defense.
An important consideration is the fact that raising false alarms carries a significantly lower cost than not detecting attacks.
For this reason, we examine how cost-sensitive classification methods can be used in Intrusion Detection systems.
The performance of the approach is evaluated under different experimental conditions, cost matrices and different classification models, in terms of expected cost, as well as detection and false alarm rates.
We find that even under unfavourable conditions, cost-sensitive classification can still improve performance, if only slightly.
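The core idea is small enough to sketch: instead of predicting the most probable class, predict the class with minimum expected cost under a cost matrix. The cost values below are assumptions for illustration, chosen so that a missed attack costs far more than a false alarm.

```python
import numpy as np

# COST[i][j] = cost of predicting class j when the true class is i
# (classes: 0 = normal, 1 = attack; values are illustrative)
COST = np.array([[0.0, 1.0],    # true normal: a false alarm costs 1
                 [10.0, 0.0]])  # true attack: a miss costs 10

def min_cost_class(posterior):
    """Choose the class minimizing expected cost under the posterior."""
    expected = posterior @ COST   # expected[j] = sum_i p(i) * COST[i][j]
    return int(np.argmin(expected))
```

With posterior [0.85, 0.15] a plain maximum-probability rule predicts “normal”, while the cost-sensitive rule raises an alarm, since 0.15 × 10 > 0.85 × 1.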
Read – Wuthering Heights
Today I finished reading “Wuthering Heights” by Emily Brontë
Read – The Cathedral and the Bazaar
Today I finished reading “The Cathedral and the Bazaar” by Eric S. Raymond
Paper – Thermodynamics of Information Retrieval
Today I read a paper titled “Thermodynamics of Information Retrieval”
The abstract is:
In this work, we suggest a parameterized statistical model (the gamma distribution) for the frequency of word occurrences in long strings of English text and use this model to build a corresponding thermodynamic picture by constructing the partition function.
We then use our partition function to compute thermodynamic quantities such as the free energy and the specific heat.
In this approach, the parameters of the word frequency model vary from word to word so that each word has a different corresponding thermodynamics and we suggest that differences in the specific heat reflect differences in how the words are used in language, differentiating keywords from common and function words.
Finally, we apply our thermodynamic picture to the problem of retrieval of texts based on keywords and suggest some advantages over traditional information retrieval methods.
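A generic version of such a construction fits in a few lines. Note the hedge: the paper fits a gamma distribution per word, whereas this sketch derives energies from empirical word frequencies, purely for illustration.

```python
import math
from collections import Counter

def thermo(text, beta=1.0):
    """Build a toy partition function over word 'energies' E_w = -ln p(w)
    and return (Z, free energy, specific heat). At beta = 1 the Boltzmann
    weights reduce to the empirical probabilities, so Z = 1 and F = 0."""
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    energies = [-math.log(c / total) for c in counts.values()]
    weights = [math.exp(-beta * e) for e in energies]
    Z = sum(weights)                                   # partition function
    probs = [w / Z for w in weights]
    mean_e = sum(p * e for p, e in zip(probs, energies))
    var_e = sum(p * (e - mean_e) ** 2 for p, e in zip(probs, energies))
    return Z, -math.log(Z) / beta, beta ** 2 * var_e   # C = beta^2 Var(E)
```

Sweeping beta and comparing the specific-heat curves of individual words is, in spirit, how the paper separates keywords from common and function words.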
Listening – The Promise
This week I am listening to “The Promise” by Bruce Springsteen
Read – Waverley
Today I finished reading “Waverley” by Walter Scott
Paper – The Accelerating Growth of Online Tagging Systems
Today I read a paper titled “The Accelerating Growth of Online Tagging Systems”
The abstract is:
Research on the growth of online tagging systems not only is interesting in its own right, but also yields insights for website management and semantic web analysis.
Traditional models describing the growth of online systems can be divided between linear and nonlinear versions.
Linear models, including the BA model (Barabási and Albert, 1999), assume that the average activity of users is a constant independent of population.
Hence the total activity is a linear function of population.
On the contrary, nonlinear models suggest that the average activity is affected by the size of the population and the total activity is a nonlinear function of population.
In the current study, supporting evidence for the nonlinear growth assumption is obtained from data on Internet users’ tagging behavior.
A power-law relationship between the number of new tags (F) and the population (P), which can be expressed as F ~ P ^ gamma (gamma > 1), is found.
I call this pattern accelerating growth and find that it relates to time-invariant heterogeneity in individual activities.
I also show how a greater heterogeneity leads to a faster growth.
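Estimating the exponent gamma in F ~ P^gamma is a one-liner in log-log space; a least-squares sketch:

```python
import math

def fit_power_law(P, F):
    """Fit F = c * P**gamma by ordinary least squares on log F vs log P;
    returns (gamma, c)."""
    xs = [math.log(p) for p in P]
    ys = [math.log(f) for f in F]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    gamma = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    c = math.exp(my - gamma * mx)
    return gamma, c
```

A fitted gamma greater than 1 is exactly the accelerating-growth signature the paper reports.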
Read – The Feynman Lectures on Physics Vol 5
Today I finished reading “The Feynman Lectures on Physics Vol 5: On Fundamentals/Energy & Motion” by Richard Feynman
Read – Game Programming Gems 8
Today I finished reading “Game Programming Gems 8” by Adam Lake
Read – Problem Identified: And You’re Probably Not Part of the Solution
Today I finished reading “Problem Identified: And You’re Probably Not Part of the Solution” by Scott Adams
Paper – Interactive Hatching and Stippling by Example
Today I read a paper titled “Interactive Hatching and Stippling by Example”
The abstract is:
We describe a system that lets a designer interactively draw patterns of strokes in the picture plane, then guide the synthesis of similar patterns over new picture regions.
Synthesis is based on an initial user-assisted analysis phase in which the system recognizes distinct types of strokes (hatching and stippling) and organizes them according to perceptual grouping criteria.
The synthesized strokes are produced by combining properties (e.g. length, orientation, parallelism, proximity) of the stroke groups extracted from the input examples.
We illustrate our technique with a drawing application that allows the control of attributes and scale-dependent reproduction of the synthesized patterns.
Paper – Assessing Cognitive Load on Web Search Tasks
Today I read a paper titled “Assessing Cognitive Load on Web Search Tasks”
The abstract is:
Assessing cognitive load on web search is useful for characterizing search system features and search tasks with respect to their demands on the searcher’s mental effort.
It is also helpful for examining how individual differences among searchers (e.g. cognitive abilities) affect the search process.
We examined cognitive load from the perspective of primary and secondary task performance.
A controlled web search study was conducted with 48 participants.
The primary task performance components were found to be significantly related to both the objective and the subjective task difficulty.
However, the relationship between objective and subjective task difficulty and the secondary task performance measures was weaker than expected.
The results indicate that the dual-task approach needs to be used with caution.
Listening – Infinite Arms
This week I am listening to “Infinite Arms” by Band Of Horses
Paper – Simulating Spiking Neural P systems without delays using GPUs
Today I read a paper titled “Simulating Spiking Neural P systems without delays using GPUs”
The abstract is:
We present in this paper our work regarding simulating a type of P system known as a spiking neural P system (SNP system) using graphics processing units (GPUs).
GPUs, because of their architectural optimization for parallel computations, are well-suited for highly parallelizable problems.
Due to the advent of general purpose GPU computing in recent years, GPUs are not limited to graphics and video processing alone, but include computationally intensive scientific and mathematical applications as well.
Moreover, P systems, including SNP systems, are inherently and maximally parallel computing models whose inspirations are taken from the functioning and dynamics of a living cell.
In particular, SNP systems try to give a modest but formal representation of a special type of cell, the neuron, and of the interactions of neurons with one another.
The nature of SNP systems allowed their representation as matrices, which is a crucial step in simulating them on highly parallel devices such as GPUs.
The highly parallel nature of SNP systems necessitates the use of hardware intended for parallel computations.
The simulation algorithms, design considerations, and implementation are presented.
Finally, simulation results, observations, and analyses using an SNP system that generates all numbers in $\mathbb{N} - \{1\}$ are discussed, as well as recommendations for future work.
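The matrix representation the abstract mentions is what makes the GPU mapping natural: a configuration vector of spike counts is updated each step by one matrix product. Here is my own toy sketch of that idea (the three-neuron system and its rules are hypothetical, not the paper's example):

```python
import numpy as np

# Hypothetical 3-neuron SNP system (not the paper's example system).
# Each row of M is one rule: spikes consumed in its own neuron (negative)
# and spikes sent to neighbouring neurons (positive).
M = np.array([
    [-1,  1,  0],   # rule 1: neuron 0 consumes 1 spike, sends 1 to neuron 1
    [ 0, -1,  1],   # rule 2: neuron 1 consumes 1 spike, sends 1 to neuron 2
    [ 1,  0, -1],   # rule 3: neuron 2 consumes 1 spike, sends 1 to neuron 0
])

C = np.array([2, 0, 0])   # configuration vector: spikes per neuron
S = np.array([1, 0, 0])   # spiking vector: only rule 1 fires this step

# One transition step: next configuration = C + S . M
C_next = C + S @ M
print(C_next)             # -> [1 1 0]
```

Each step is a single matrix-vector product, which is exactly the kind of operation GPUs parallelize well.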
Paper – Analytic treatment of the network synchronization problem with time delays
Today I read a paper titled “Analytic treatment of the network synchronization problem with time delays”
The abstract is:
Motivated by novel results in the theory of network synchronization, we analyze the effects of nonzero time delays in stochastic synchronization problems with linear couplings in an arbitrary network.
We determine analytically the fundamental limit of synchronization efficiency in a noisy environment with uniform time delays.
We show that the optimal efficiency of the network is achieved for $\lambda\tau=\frac{\pi^{3/2}}{2\sqrt{\pi}+4}\approx0.738$, where $\lambda$ is the coupling strength (relaxation coefficient) and $\tau$ is the characteristic time delay in the communication between pairs of nodes.
Our analysis reveals the underlying mechanism responsible for the trade-off phenomena observed in recent numerical simulations of network synchronization problems.
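The quoted optimum is easy to sanity-check numerically; the closed form really does evaluate to about 0.738:

```python
import math

# Optimal coupling-delay product from the abstract:
# lambda * tau = pi^(3/2) / (2*sqrt(pi) + 4)
opt = math.pi ** 1.5 / (2 * math.sqrt(math.pi) + 4)
print(round(opt, 3))  # -> 0.738
```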
Listening – Heligoland
This week I am listening to “Heligoland” by Massive Attack
Paper – Subjective Collaborative Filtering
Today I read a paper titled “Subjective Collaborative Filtering”
The abstract is:
We present an item-based approach for collaborative filtering.
We determine a list of recommended items for a user by considering their previous purchases.
Additionally, other features of the users, such as page views and search queries, could be considered.
In particular we address the problem of efficiently comparing items.
Our algorithm can efficiently approximate an estimate of the similarity between two items.
As a measure of similarity we use an approximation of the Jaccard similarity that can be computed by constant time operations and one bitwise OR.
Moreover we improve the accuracy of the similarity by introducing the concept of user preference for a given product, which both takes into account multiple purchases and purchases of related items.
The product of the user preference and the Jaccard measure (or its approximation) is used as a score for deciding whether a given product has to be recommended.
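For reference, the Jaccard similarity the paper approximates is just intersection over union. This is my own minimal illustration with made-up item and user names; it shows the exact measure, not the paper's constant-time bitwise approximation:

```python
# Jaccard similarity between two items, each represented by
# the set of users who purchased it (names are hypothetical).
def jaccard(a, b):
    """|A ∩ B| / |A ∪ B| -- 1.0 for identical sets, 0.0 for disjoint."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

buyers_of_camera = {"alice", "bob", "carol", "dave"}
buyers_of_tripod = {"bob", "carol", "erin"}

print(jaccard(buyers_of_camera, buyers_of_tripod))  # -> 0.4
```

In the paper's scheme this score would then be weighted by the user-preference term before deciding whether to recommend the item.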
Read – Programming Windows® Phone 7
Today I finished reading “Programming Windows® Phone 7” by Charles Petzold
Studying – Digital inking in Photoshop
This month I am studying “Digital inking in Photoshop”
Listening – Learning
This week I am listening to “Learning” by Perfume Genius
Paper – Digital Restoration of Ancient Papyri
Today I read a paper titled “Digital Restoration of Ancient Papyri”
The abstract is:
Image processing can be used for digital restoration of ancient papyri, that is, for a restoration performed on their digital images.
The digital manipulation allows reducing the background signals and enhancing the readability of texts.
In the case of very old and damaged documents, this is fundamental for identification of the patterns of letters.
Some examples of restoration, obtained with an image processing which uses edges detection and Fourier filtering, are shown.
One of them concerns the 7Q5 fragment of the Dead Sea Scrolls.
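The Fourier-filtering idea can be sketched generically (my own toy example, not the paper's actual pipeline): suppress the low spatial frequencies that carry the slowly varying papyrus background, keeping the high frequencies that carry edges and letter strokes.

```python
import numpy as np

def highpass_filter(image, cutoff=4):
    """Zero out low spatial frequencies (the slowly varying background)
    and keep the rest, which carries edges and letter strokes."""
    F = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    cy, cx = h // 2, w // 2
    y, x = np.ogrid[:h, :w]
    mask = (y - cy) ** 2 + (x - cx) ** 2 > cutoff ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

# Synthetic "papyrus": smooth background gradient plus one sharp stroke.
img = np.linspace(0.0, 1.0, 64)[None, :] * np.ones((64, 64))
img[30:34, 10:54] += 1.0           # the "ink"
restored = highpass_filter(img)
# The stroke now stands out against a flattened background.
```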
Paper – Extensive Games with Possibly Unaware Players
Today I read a paper titled “Extensive Games with Possibly Unaware Players”
The abstract is:
Standard game theory assumes that the structure of the game is common knowledge among players.
We relax this assumption by considering extensive games where agents may be unaware of the complete structure of the game.
In particular, they may not be aware of moves that they and other agents can make.
We show how such games can be represented; the key idea is to describe the game from the point of view of every agent at every node of the game tree.
We provide a generalization of Nash equilibrium and show that every game with awareness has a generalized Nash equilibrium.
Finally, we extend these results to games with awareness of unawareness, where a player i may be aware that a player j can make moves that i is not aware of, and to subjective games, where players may have no common knowledge regarding the actual game and their beliefs are incompatible with a common prior.
Paper – A 8 bits Pipeline Analog to Digital Converter Design for High Speed Camera Application
Today I read a paper titled “A 8 bits Pipeline Analog to Digital Converter Design for High Speed Camera Application”
The abstract is:
This paper describes a pipeline analog-to-digital converter implemented for a high-speed camera.
In the pipeline ADC design, the prime factor is designing an operational amplifier with high gain so that the ADC achieves high speed.
Other advantages of the pipeline architecture are that it is simple in concept, easy to implement in layout, and flexible for increasing speed.
We made the design and simulation using Mentor Graphics software with 0.6 µm CMOS technology, with a total power dissipation of 75.47 mW.
Circuit techniques used include a precise comparator, operational amplifier and clock management.
A switched capacitor is used for sampling and multiplying at each stage.
Simulation shows a worst-case DNL and INL of 0.75 LSB.
The design operates at 5 V dc.
The ADC achieves an SNDR of 44.86 dB.
Keywords: pipeline, switched capacitor, clock management.
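A pipeline ADC is easy to model behaviorally. Here is my own ideal 1-bit-per-stage toy model (not the paper's circuit, which will differ in stage resolution and non-idealities): each stage compares its input to Vref/2, emits a bit, and passes a doubled residue onward, which is the "sample and multiply" role the switched capacitor plays.

```python
def pipeline_adc(vin, vref=5.0, bits=8):
    """Ideal 1-bit-per-stage pipeline ADC model.

    Each stage: compare the residue to vref/2 (the comparator),
    then double it and subtract bit*vref (the switched-capacitor
    multiply-by-two and subtraction). Returns the digital code.
    """
    code = 0
    v = vin
    for _ in range(bits):
        bit = 1 if v >= vref / 2 else 0
        code = (code << 1) | bit
        v = 2 * v - bit * vref      # residue passed to the next stage
    return code

# 5 V full scale, matching the abstract's 5 V dc supply.
print(pipeline_adc(2.5))              # -> 128  (mid-scale)
print(pipeline_adc(5.0 * 200 / 256))  # -> 200
```

With 8 stages this yields an 8-bit code; a real design adds redundancy and digital error correction to tolerate comparator offsets.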
Paper – Self-Assembly with Geometric Tiles
Today I read a paper titled “Self-Assembly with Geometric Tiles”
The abstract is:
In this work we propose a generalization of Winfree’s abstract Tile Assembly Model (aTAM) in which tile types are assigned rigid shapes, or geometries, along each tile face.
We examine the number of distinct tile types needed to assemble shapes within this model, the temperature required for efficient assembly, and the problem of designing compact geometric faces to meet given compatibility specifications.
Our results show a dramatic decrease in the number of tile types needed to assemble $n \times n$ squares to $\Theta(\sqrt{\log n})$ at temperature 1 for the most simple model which meets a lower bound from Kolmogorov complexity, and $O(\log\log n)$ in a model in which tile aggregates must move together through obstacle free paths within the plane.
This stands in contrast to the $\Theta(\log n / \log\log n)$ tile types at temperature 2 needed in the basic aTAM.
We also provide a general method for simulating a large and computationally universal class of temperature 2 aTAM systems with geometric tiles at temperature 1.
Finally, we consider the problem of computing a set of compact geometric faces for a tile system to implement a given set of compatibility specifications.
We show a number of bounds on the complexity of geometry size needed for various classes of compatibility specifications, many of which we directly apply to our tile assembly results to achieve non-trivial reductions in geometry size.