Today I finished reading “The Pleasure of Finding Things Out: The Best Short Works of Richard P. Feynman” by Richard Feynman
Paper – The Role of Time in the Creation of Knowledge
Today I read a paper titled “The Role of Time in the Creation of Knowledge”
The abstract is:
In this paper I assume that in humans the creation of knowledge depends on a discrete-time, or stage, sequential decision-making process subjected to a stochastic, information-transmitting environment.
For each time-stage, this environment randomly transmits Shannon-type information-packets to the decision-maker, who examines each of them for relevancy and then determines his optimal choices.
Using this set of relevant information-packets, the decision-maker adapts, over time, to the stochastic nature of his environment, and optimizes the subjective expected rate-of-growth of knowledge.
The decision-maker’s optimal actions lead to a decision function that involves, over time, his view of the subjective entropy of the environmental process and other important parameters at each time-stage of the process.
Using this model of human behavior, one could create psychometric experiments using computer simulation and real decision-makers, to play programmed games to measure the resulting human performance.
Paper – Anisotropic selection in cellular genetic algorithms
Today I read a paper titled “Anisotropic selection in cellular genetic algorithms”
The abstract is:
In this paper we introduce a new selection scheme in cellular genetic algorithms (cGAs).
Anisotropic Selection (AS) promotes diversity and allows accurate control of the selective pressure.
First we compare this new scheme with the classical rectangular-grid-shape solutions with respect to selective pressure: we can obtain the same takeover time with the two techniques, although the spreading of the best individual is different.
We then give experimental results that show to what extent AS promotes the emergence of niches that support low coupling and high cohesion.
Finally, using a cGA with anisotropic selection on a Quadratic Assignment Problem we show the existence of an anisotropic optimal value for which the best average performance is observed.
Further work will focus on the selective pressure self-adjustment ability provided by this new selection scheme.
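The paper itself ships no code, so to fix the idea in my head I roughed out what anisotropic selection on a cellular grid might look like. This is a minimal Python sketch under my own assumptions; the function names and the bias parameter alpha are my invention, not the authors’.

```python
import random

def anisotropic_neighbor(grid, x, y, alpha):
    """Pick a neighbor of cell (x, y) on a toroidal grid.

    alpha in [0, 1] biases selection toward the vertical axis:
    alpha = 0.5 is the usual isotropic von Neumann selection,
    while values away from 0.5 stretch selection along one axis,
    which is the idea behind anisotropic selection (AS).
    """
    h, w = len(grid), len(grid[0])
    if random.random() < alpha:                    # vertical neighbors
        nx, ny = x, (y + random.choice([-1, 1])) % h
    else:                                          # horizontal neighbors
        nx, ny = (x + random.choice([-1, 1])) % w, y
    return grid[ny][nx]

def select_mate(grid, x, y, alpha, fitness):
    """Binary tournament between two anisotropically drawn neighbors."""
    a = anisotropic_neighbor(grid, x, y, alpha)
    b = anisotropic_neighbor(grid, x, y, alpha)
    return a if fitness(a) >= fitness(b) else b
```

Tuning alpha then tunes how fast the best individual spreads across the grid, which is presumably what gives the accurate control of selective pressure the abstract claims.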
Paper – Mathematical models of the complex surfaces in simulation and visualization systems
Today I read a paper titled “Mathematical models of the complex surfaces in simulation and visualization systems”
The abstract is:
Modeling, simulation and visualization of three-dimensional complex bodies make wide use of mathematical models of curves and surfaces.
The most important curves and surfaces for these purposes are curves and surfaces in Hermite and Bezier forms, splines and NURBS.
This article surveys this way of using geometrical data in various computer graphics systems and adjacent fields.
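Since the abstract name-checks Bezier forms, here is a tiny sketch of my own (not taken from the article) of evaluating a cubic Bezier curve from its four control points:

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1].

    The curve is the Bernstein-weighted blend of four control points:
    B(t) = (1-t)^3 p0 + 3(1-t)^2 t p1 + 3(1-t) t^2 p2 + t^3 p3
    """
    u = 1.0 - t
    return tuple(
        u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

# Midpoint of a simple 2D curve:
print(cubic_bezier((0, 0), (0, 1), (1, 1), (1, 0), 0.5))  # (0.5, 0.75)
```

Surfaces in Bezier form are the tensor-product generalization of the same blend, which is why this little formula shows up everywhere in the systems the article surveys.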
Read – Perfect Phrases for Writing Company Announcements
Today I finished reading “Perfect Phrases for Writing Company Announcements: Hundreds of Ready-to-Use Phrases for Powerful Internal and External Communications” by Harriet Diamond
Listening – This Is Happening
This week I am listening to “This Is Happening” by LCD Soundsystem
Read – How Rich People Think
Today I finished reading “How Rich People Think” by Steve Siebold
Studying – Growing a business with content marketing
This month I am studying “Growing a business with content marketing”
I’ve always loved content marketing and think it is an incredibly powerful technique for building up authority.
This is an in-person three day class.
Update: Logged 29 hours of class time and extra exercises.
Listening – American Slang
This week I am listening to “American Slang” by The Gaslight Anthem
Read – Version Control with Git
Today I finished reading “Version Control with Git” by Jon Loeliger
Paper – Extraction of cartographic objects in high resolution satellite images for object model generation
Today I read a paper titled “Extraction of cartographic objects in high resolution satellite images for object model generation”
The abstract is:
The aim of this study is to detect man-made cartographic objects in high-resolution satellite images.
New generation satellites offer a sub-metric spatial resolution, in which it is possible (and necessary) to develop methods at object level rather than at pixel level, and to exploit structural features of objects.
With this aim, a method to generate structural object models from manually segmented images has been developed.
To generate the model from non-segmented images, extraction of the objects from the sample images is required.
A hybrid method of extraction (both in terms of input sources and segmentation algorithms) is proposed: a region-based segmentation is applied on a 10-meter-resolution multi-spectral image.
The result is used as a marker in a “marker-controlled watershed method using edges” on a 2.5-meter-resolution panchromatic image.
Very promising results have been obtained even on images where the limits of the target objects are not apparent.
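To make the pipeline concrete for myself I sketched the marker-controlled watershed step in Python with OpenCV. This is only a single-image toy under my own assumptions: the file name is a placeholder, and a crude Otsu threshold stands in for the paper’s region-based segmentation of the multi-spectral image.

```python
import cv2
import numpy as np

# Placeholder file name; the paper uses a 10 m multi-spectral image
# for markers and a 2.5 m panchromatic image for the watershed itself.
pan = cv2.imread("panchromatic.png", cv2.IMREAD_GRAYSCALE)

# Stand-in for the paper's region-based segmentation: a crude
# threshold plus connected components to produce labelled markers.
_, seeds = cv2.threshold(pan, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
seeds = cv2.erode(seeds, np.ones((5, 5), np.uint8))  # shrink to "sure" regions
n_labels, markers = cv2.connectedComponents(seeds)

# cv2.watershed expects an 8-bit 3-channel image and int32 markers;
# it floods outward from the markers, guided by the image's edges.
markers = cv2.watershed(cv2.cvtColor(pan, cv2.COLOR_GRAY2BGR),
                        markers.astype(np.int32))
boundaries = (markers == -1)  # watershed lines between regions
```

The trick the paper exploits is that the markers come from a coarser, spectrally richer image, so the fine panchromatic watershed is seeded in the right places even when object boundaries are faint.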
Social media is a noise making machine
I find that most people on social media make an awful lot of noise but rarely have much to say.
Read – The God Engines
Today I finished reading “The God Engines” by John Scalzi
Listening – A Thousand Suns
This week I am listening to “A Thousand Suns” by Linkin Park
Read – The Myth of Multitasking
Today I finished reading “The Myth of Multitasking: How “Doing It All” Gets Nothing Done” by Dave Crenshaw
Paper – User-driven applications
Today I read a paper titled “User-driven applications”
The abstract is:
User-driven applications are programs in which full control is given to the users.
Designers of such programs are responsible only for developing an instrument for solving some task; they do not force users to work with this instrument according to a predefined scenario.
Users’ control of the applications means that only the users decide, at any moment, WHAT, WHEN, and HOW must appear on the screen.
Such applications can be constructed only on the basis of moveable/resizable elements.
Programs based on such elements have very interesting features and open up entirely new possibilities.
This article describes the design of the user-driven applications and shows the consequences of switching to such type of programs on the samples from different areas.
Paper – Study of Self-Organization Model of Multiple Mobile Robot
Today I read a paper titled “Study of Self-Organization Model of Multiple Mobile Robot”
The abstract is:
A good organization model for multiple mobile robots should be able to improve the efficiency of the system, reduce the complexity of robot interactions, and lower the difficulty of computation.
From the sociology aspect of topology, structure and organization, this paper studies the multiple mobile robot organization formation and running mechanism in the dynamic, complicated and unknown environment.
It presents and describes in detail a Hierarchical-Web Recursive Organization Model (HWROM) and its forming algorithm.
It defines the robot society leader, robotic team leader, and individual robot with the same structure under a unified framework, and describes the organization model with a recursive structure.
The model uses task-oriented and top-down method to dynamically build and maintain structures and organization.
It uses market-based techniques to assign task, form teams and allocate resources in dynamic environment.
The model exhibits several characteristics: self-organization, dynamism, conciseness, commonality, and robustness.
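The market-based task assignment is the part that is easiest to picture in code. Here is a toy greedy single-item auction in Python; it is my own stand-in rather than the paper’s algorithm, and the cost function is whatever metric fits (distance, energy, and so on).

```python
def auction_assign(robots, tasks, cost):
    """Greedy single-item auction: each task is announced in turn,
    every free robot bids its cost, and the lowest bid wins.

    A toy stand-in for the market-based task allocation the paper
    describes; cost(robot, task) is the bid.
    """
    assignment = {}
    free = set(robots)
    for task in tasks:
        if not free:
            break
        winner = min(free, key=lambda r: cost(r, task))
        assignment[task] = winner
        free.remove(winner)
    return assignment

# Example with positions as robots/tasks and Euclidean distance as the bid:
robots = [(0, 0), (5, 5)]
tasks = [(1, 1), (4, 6)]
dist = lambda r, t: ((r[0] - t[0])**2 + (r[1] - t[1])**2) ** 0.5
print(auction_assign(robots, tasks, dist))  # {(1, 1): (0, 0), (4, 6): (5, 5)}
```

In the paper’s setting the same bidding idea also forms teams and allocates resources, with the recursive leader/team/robot structure deciding who runs each auction.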
Read – Try Rebooting Yourself
Today I finished reading “Try Rebooting Yourself” by Scott Adams
Paper – Image Retrieval Techniques based on Image Features, A State of Art approach for CBIR
Today I read a paper titled “Image Retrieval Techniques based on Image Features, A State of Art approach for CBIR”
The abstract is:
The purpose of this paper is to describe our research on different feature extraction and matching techniques in designing a Content Based Image Retrieval (CBIR) system.
Due to the enormous increase in image database sizes, as well as their vast deployment in various applications, the need for CBIR development arose.
First, this paper outlines a description of the primitive feature extraction techniques such as texture, colour, and shape.
Once these features are extracted and used as the basis for a similarity check between images, the various matching techniques are discussed.
Furthermore, the results of its performance are illustrated by a detailed example.
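As a refresher on the colour feature the abstract mentions, here is a small NumPy sketch of my own: a quantized colour histogram plus histogram intersection as the similarity check between images.

```python
import numpy as np

def colour_histogram(image, bins=8):
    """Quantize an H x W x 3 RGB image into a normalized colour histogram,
    one of the primitive features the abstract lists."""
    hist, _ = np.histogramdd(image.reshape(-1, 3),
                             bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    return hist.ravel() / hist.sum()

def histogram_intersection(h1, h2):
    """A classic CBIR similarity measure: 1.0 for identical histograms."""
    return np.minimum(h1, h2).sum()

# Toy query: rank a tiny "database" of random images against a query image.
rng = np.random.default_rng(0)
db = [rng.integers(0, 256, (32, 32, 3)) for _ in range(3)]
query = db[0]
scores = [histogram_intersection(colour_histogram(query), colour_histogram(im))
          for im in db]
print(scores)  # the query itself scores 1.0
```

Texture and shape features slot into the same pattern: extract a fixed-length vector per image, then rank the database by a similarity function.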
Listening – Maya
This week I am listening to “Maya” by M.I.A. (UK)
Paper – Object-Oriented Program Comprehension: Effect of Expertise, Task and Phase
Today I read a paper titled “Object-Oriented Program Comprehension: Effect of Expertise, Task and Phase”
The abstract is:
The goal of our study is to evaluate the effect on program comprehension of three factors that have not previously been studied in a single experiment.
These factors are programmer expertise (expert vs. novice), programming task (documentation vs. reuse), and the development of understanding over time (phase 1 vs. phase 2).
This study is carried out in the context of the mental model approach to comprehension based on van Dijk and Kintsch’s model (1983).
One key aspect of this model is the distinction between two kinds of representation the reader might construct from a text: 1) the textbase, which refers to what is said in the text and how it is said, and 2) the situation model, which represents the situation referred to by the text.
We have evaluated the effect of the three factors mentioned above on the development of both the textbase (or program model) and the situation model in object-oriented program comprehension.
We found a four-way interaction of expertise, phase, task and type of model.
For the documentation group we found that experts and novices differ in the elaboration of their situation model but not their program model.
There was no interaction of expertise with phase and type of model in the documentation group.
For the reuse group, there was a three-way interaction between phase, expertise and type of model.
For the novice reuse group, the effect of the phase was to increase the construction of the situation model but not the program model.
With respect to the task, our results show that novices do not spontaneously construct a strong situation model but are able to do so if the task demands it.
Paper – Detecting and Tracking the Spread of Astroturf Memes in Microblog Streams
Today I read a paper titled “Detecting and Tracking the Spread of Astroturf Memes in Microblog Streams”
The abstract is:
Online social media are complementing and in some cases replacing person-to-person social interaction and redefining the diffusion of information.
In particular, microblogs have become crucial grounds on which public relations, marketing, and political battles are fought.
We introduce an extensible framework that will enable the real-time analysis of meme diffusion in social media by mining, visualizing, mapping, classifying, and modeling massive streams of public microblogging events.
We describe a Web service that leverages this framework to track political memes in Twitter and help detect astroturfing, smear campaigns, and other misinformation in the context of U.S. political elections.
We present some cases of abusive behaviors uncovered by our service.
Finally, we discuss promising preliminary results on the detection of suspicious memes via supervised learning based on features extracted from the topology of the diffusion networks, sentiment analysis, and crowdsourced annotations.
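The last step, supervised detection of suspicious memes, is easy to sketch. Below is my own toy version in Python with scikit-learn; the feature values and their meanings are invented placeholders in the spirit of the paper’s topology, sentiment, and annotation features, not the authors’ data.

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-meme feature vectors in the spirit of the paper:
# [number of retweeting users, diameter of the diffusion network,
#  mean sentiment score, fraction of crowdsourced "suspicious" flags]
X = [
    [1200,  9,  0.1, 0.05],   # organic meme
    [  90,  2,  0.7, 0.80],   # astroturf-like burst from few accounts
    [3400, 11, -0.2, 0.02],   # organic
    [ 150,  3,  0.6, 0.75],   # astroturf-like
]
y = [0, 1, 0, 1]  # 0 = organic, 1 = suspicious

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[110, 2, 0.65, 0.7]]))  # likely flagged as suspicious
```

The interesting part in the paper is upstream of this: astroturf campaigns tend to produce diffusion networks that are star-like and account-concentrated, so topology features alone already separate them surprisingly well.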
Paper – Is the crowd’s wisdom biased? A quantitative assessment of three online communities
Today I read a paper titled “Is the crowd’s wisdom biased? A quantitative assessment of three online communities”
The abstract is:
This paper presents a study of user voting on three websites: IMDb, Amazon and BookCrossing.
It reports on an expert evaluation of the voting mechanisms of each website and a quantitative data analysis of users’ aggregate voting behavior.
The results suggest that voting follows different patterns across the websites, with a higher barrier to vote resulting in fewer one-off voters and attracting mostly experts.
The results also show that one-off voters tend to vote on popular items, while experts mostly vote for obscure, low-rated items.
The study concludes with design suggestions to address the “wisdom of the crowd” bias.
Paper – On weakly optimal partitions in modular networks
Today I read a paper titled “On weakly optimal partitions in modular networks”
The abstract is:
Modularity was introduced as a measure of goodness for the community structure induced by a partition of the set of vertices in a graph.
Then, it also became an objective function used to find good partitions, with high success.
Nevertheless, some works have shown a scaling limit and certain instabilities when finding communities with this criterion.
Modularity has been studied through several proposed formalisms, such as Hamiltonians in a Potts model or Laplacians in spectral partitioning.
In this paper we present a new probabilistic formalism to analyze modularity, and from it we derive an algorithm based on weakly optimal partitions.
This algorithm obtains good quality partitions and also scales to large graphs.
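For reference, here is the standard Newman-Girvan modularity the paper builds on, in a small NumPy sketch of my own (the paper’s probabilistic formalism and weakly-optimal-partition algorithm are not reproduced here):

```python
import numpy as np

def modularity(A, communities):
    """Newman-Girvan modularity Q of a partition of an undirected graph.

    A is the adjacency matrix; communities maps vertex -> community id.
    Q sums, over vertex pairs inside communities, the difference between
    the observed edge and the edge expected under degree-preserving
    random rewiring: Q = (1/2m) * sum_ij [A_ij - k_i k_j / 2m] * delta(c_i, c_j).
    """
    A = np.asarray(A, dtype=float)
    k = A.sum(axis=1)            # vertex degrees
    two_m = A.sum()              # 2m for an undirected graph
    Q = 0.0
    for i in range(len(A)):
        for j in range(len(A)):
            if communities[i] == communities[j]:
                Q += A[i, j] - k[i] * k[j] / two_m
    return Q / two_m

# Two triangles joined by one edge, split into the two obvious communities:
A = np.zeros((6, 6))
for a, b in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[a, b] = A[b, a] = 1
print(modularity(A, [0, 0, 0, 1, 1, 1]))  # about 0.357
```

The scaling limit the abstract mentions is the known resolution limit: because of the k_i k_j / 2m null model, maximizing Q can refuse to see communities smaller than a scale set by the whole graph.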
Listening – Have One On Me
This week I am listening to “Have One On Me” by Joanna Newsom
Paper – The Good, the Bad, and the Ugly: three different approaches to break their watermarking system
Today I read a paper titled “The Good, the Bad, and the Ugly: three different approaches to break their watermarking system”
The abstract is:
The Good is Blondie, a wandering gunman with a strong personal sense of honor.
The Bad is Angel Eyes, a sadistic hitman who always hits his mark.
The Ugly is Tuco, a Mexican bandit who’s always only looking out for himself.
Against the backdrop of the BOWS contest, they search for a watermark in gold buried in three images.
Each knows only a portion of the gold’s exact location, so for the moment they’re dependent on each other.
However, none are particularly inclined to share…
Studying – Advanced content marketing
This month I am studying “Advanced content marketing”
I want to boost my personal marketing outreach so I am setting a goal of at least three months focus on acquiring more marketing skills.
My first class will be in content marketing, run by a friend in Santa Barbara. The deal: I build him a sales website, he gives me access to his six-month online content marketing courseware for free.
Update: That was a lot of fun. Lots of good video, and got some great feedback on my submitted work. I can immediately start applying that new content marketing knowledge going forward with my own marketing efforts.
Logged 25 hours of class time (video tutorials and interactive Skype sessions) and extra exercises.
Listening – Brothers
This week I am listening to “Brothers” by The Black Keys
Read – The Entrepreneur’s Guide to Customer Development
Today I finished reading “The Entrepreneur’s Guide to Customer Development: A Cheat Sheet to the Four Steps to the Epiphany” by Brant Cooper
Just because I did it for free doesn’t mean you can have it for free
I did some contract work for a “gentleman” many years ago who was having difficulty getting the performance he needed out of a homegrown, poorly implemented 3D rendering engine created by developers who had never built a 3D rendering engine before.
I had developed, as a separate personal project, a 3D rendering engine that would fit the bill and solve many of the problems we were suffering from.
He wanted me to hand over the 3D rendering engine for free, with a perpetual, exclusive license.
He pleaded, he threatened, he cajoled, he tried to reason endlessly with me that I should give him the source code “because I wasn’t using it.”
He attempted to reason that because I was not using it for anything at the time, I had no right to attempt to charge him for all of the hours I had put into the work beforehand.
Paper – High Speed and Area Efficient 2D DWT Processor based Image Compression
Today I read a paper titled “High Speed and Area Efficient 2D DWT Processor based Image Compression”
The abstract is:
This paper presents a high speed and area efficient DWT processor based design for Image Compression applications.
In this proposed design, a pipelined, partially serial architecture has been used to enhance the speed along with optimal utilization of the resources available on the target FPGA.
The proposed model has been designed and simulated using Simulink and System Generator blocks, synthesized with the Xilinx Synthesis Tool (XST), and implemented on the Spartan 2 and 3 based XC2S100-5tq144 and XC3S500E-4fg320 target devices.
The results show that the proposed design can operate at a maximum frequency of 231 MHz in the case of the Spartan 3, consuming 117 mW at a 28°C junction temperature.
The result comparison has shown an improvement of 15% in speed.
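The hardware details are beyond a quick sketch, but the transform itself is simple. Here is a one-level 2D Haar DWT in NumPy, my own software stand-in for what the processor computes:

```python
import numpy as np

def haar_dwt_2d(img):
    """One level of a 2D Haar DWT, the kind of transform the paper's
    processor implements in hardware.

    Splits the image into an approximation (LL) and detail (LH, HL, HH)
    sub-bands; image compression then quantizes the detail bands hard.
    """
    img = np.asarray(img, dtype=float)
    # rows: average / difference of adjacent pixel pairs
    lo = (img[:, 0::2] + img[:, 1::2]) / 2
    hi = (img[:, 0::2] - img[:, 1::2]) / 2
    # columns: repeat the same split on both results
    ll, lh = (lo[0::2] + lo[1::2]) / 2, (lo[0::2] - lo[1::2]) / 2
    hl, hh = (hi[0::2] + hi[1::2]) / 2, (hi[0::2] - hi[1::2]) / 2
    return ll, lh, hl, hh

# A constant image has all its energy in LL and zero detail:
ll, lh, hl, hh = haar_dwt_2d(np.full((4, 4), 7.0))
print(ll)              # all 7s
print(abs(hh).max())   # 0.0
```

The row pass and column pass are independent multiply-accumulate pipelines, which is exactly why the 2D DWT maps so well onto a pipelined, partially serial FPGA architecture.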
Listening – The Drums
This week I am listening to “The Drums” by The Drums
Paper – Multimedia Applications of Multiprocessor Systems-on-Chips
Today I read a paper titled “Multimedia Applications of Multiprocessor Systems-on-Chips”
The abstract is:
This paper surveys the characteristics of multimedia systems.
Multimedia applications today are dominated by compression and decompression, but multimedia devices must also implement many other functions such as security and file management.
We introduce some basic concepts of multimedia algorithms and the larger set of functions that multimedia systems-on-chips must implement.
Hulu needs my what now?
I don’t really watch much in the way of television these days. I’m more of an accidental watcher, encountering shows when they are on the TV whilst I am visiting friends. I’ll watch an occasional movie, but most of the stuff that Hollywood puts out bores me to death. I’m not a film snob; I couldn’t tell you what something means, or why a director picked a particular location or technique in his attempt to convey a certain message, so it is not like I even watch independent movies specifically. I just find that life is too short to passively sit there and watch someone else’s entertainment product; there are too many things I want to be doing to sit in front of a screen and not interact with it.
But hey, I’m trying to keep up on modern technology and various internet services, and sometimes I feel like catching up on something such as Good Eats or Mythbusters. I thought Hulu would be an ideal service to sign up for and get a semi-regular fix, they also offer a free 7-day trial so I can check it out for a week and cancel if I don’t like it or am not using it.
Off I trundled to the Hulu website, all ready to sign up, credit card in hand because that’s how these 7-day free trials often work. The registration form wants the standard stuff: email address, name. Okay, I can supply a fake name and a made-up email address that will send all of their spam and marketing into a black hole but still work for when I need to recover a password. But now it wants First Name and Last Name, hmm, okay, nothing too out of the ordinary there, and then it wants my birth date… um… why?
So the software can screen only age appropriate content?
No, probably not, as anybody else could use my account or watch over my shoulder.
The web page goes on to require my zip code. Not for billing purposes (I’m not even at the billing screen yet); this is purely for marketing purposes.
And finally, hey, Hulu wants to know gender too.
Amazing! What next? Next year websites will need to know your blood type or income level before letting you watch TV?
What does my gender or zip code or name have to do with signing up for a subscription based TV service? Oh, that’s right, so you can advertise and market to me under the guise of ensuring I receive “relevant” content. Gosh, they even have a message “we promise to always keep this information confidential” so that makes me instantly want to trust them more because if it’s on a company’s website, the company must obviously stand by everything they say they’ll do. Except for when it inconveniences them and makes it difficult to make money off of your marketing data.
So yeah, supplying lots of data to be able to watch TV which I don’t much care for anyway? No, I don’t think so. Nobody needs to know any more about the person signing up for a service than the billing address. If information requirements go beyond that, it is highly suspicious as to why a company is collecting the data.
It comes back to who gains the benefit when this data is supplied, me or the company? And in pretty much every case, it’s the company. If there is no direct, tangible benefit to the end-user, there is no reason to collect the data.
Read – Rich Dad’s Before You Quit Your Job
Today I finished reading “Rich Dad’s Before You Quit Your Job: 10 Real-Life Lessons Every Entrepreneur Should Know About Building a Multimillion-Dollar Business” by Robert T. Kiyosaki
Listening – Congratulations
This week I am listening to “Congratulations” by MGMT
Paper – Warping Peirce Quincuncial Panoramas
Today I read a paper titled “Warping Peirce Quincuncial Panoramas”
The abstract is:
The Peirce quincuncial projection is a mapping of the surface of a sphere to the interior of a square.
It is a conformal map except for four points on the equator.
These points of non-conformality cause significant artifacts in photographic applications.
In this paper, we propose an algorithm and user-interface to mitigate these artifacts.
Moreover, in order to facilitate an interactive user-interface, we present a fast algorithm for calculating the Peirce quincuncial projection of spherical imagery.
We then promote the Peirce quincuncial projection as a viable alternative to the more popular stereographic projection in some scenarios.
Paper – Virtual Reality
Today I read a paper titled “Virtual Reality”
The abstract is:
This paper is focused on the presentation of Virtual Reality principles together with the main implementation methods and techniques.
An overview of the main development directions is included.
Paper – Semantic Modeling and Retrieval of Dance Video Annotations
Today I read a paper titled “Semantic Modeling and Retrieval of Dance Video Annotations”
The abstract is:
Dance video is one of the important types of narrative video, with semantically rich content.
This paper proposes a new meta model, Dance Video Content Model (DVCM) to represent the expressive semantics of the dance videos at multiple granularity levels.
The DVCM is designed based on concepts such as video, shot, segment, event and object, which are the components of MPEG-7 MDS.
This paper introduces a new relationship type called Temporal Semantic Relationship to infer the semantic relationships between the dance video objects.
An inverted file based index is created to reduce the search time of the dance queries.
The effectiveness of containment queries is demonstrated using precision and recall.
Keywords: Dance Video Annotations, Effectiveness Metrics, Metamodeling, Temporal Semantic Relationships.
Paper – Toward a general theory of quantum games
Today I read a paper titled “Toward a general theory of quantum games”
The abstract is:
We study properties of quantum strategies, which are complete specifications of a given party’s actions in any multiple-round interaction involving the exchange of quantum information with one or more other parties.
In particular, we focus on a representation of quantum strategies that generalizes the Choi-Jamiołkowski representation of quantum operations.
This new representation associates with each strategy a positive semidefinite operator acting only on the tensor product of its input and output spaces.
Various facts about such representations are established, and two applications are discussed: the first is a new and conceptually simple proof of Kitaev’s lower bound for strong coin-flipping, and the second is a proof of the exact characterization QRG = EXP of the class of problems having quantum refereed games.
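To remind myself what the Choi-Jamiołkowski representation actually is, I wrote this small NumPy sketch. It computes the Choi matrix of an ordinary quantum channel from its Kraus operators; the paper’s contribution, representing whole multi-round strategies this way, is not reproduced here.

```python
import numpy as np

def choi_matrix(kraus_ops, dim):
    """Choi-Jamiolkowski representation of a channel given by Kraus
    operators: J(Phi) = sum_ij |i><j| (x) Phi(|i><j|).

    The channel is completely positive iff J(Phi) is positive
    semidefinite -- the kind of property the paper's strategy
    representation generalizes to multi-round interactions.
    """
    J = np.zeros((dim * dim, dim * dim), dtype=complex)
    for i in range(dim):
        for j in range(dim):
            E = np.zeros((dim, dim), dtype=complex)
            E[i, j] = 1.0
            phi_E = sum(K @ E @ K.conj().T for K in kraus_ops)
            J += np.kron(E, phi_E)
    return J

# Fully depolarizing qubit channel via its four scaled-Pauli Kraus operators:
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1])
kraus = [0.5 * P for P in (I, X, Y, Z)]
J = choi_matrix(kraus, 2)
print(np.allclose(J, np.eye(4) / 2))  # True: J = I/2, the maximally mixed Choi state up to the 1/dim factor
```

The paper’s representation likewise assigns each strategy a positive semidefinite operator on the tensor product of its input and output spaces, just extended over all rounds of the interaction.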
Read – The Grand Design
Today I finished reading “The Grand Design” by Stephen Hawking
Read – The Greatest Show on Earth
Today I finished reading “The Greatest Show on Earth: The Evidence for Evolution” by Richard Dawkins
Paper – Bottom-Up Earley Deduction
Today I read a paper titled “Bottom-Up Earley Deduction”
The abstract is:
We propose a bottom-up variant of Earley deduction.
Bottom-up deduction is preferable to top-down deduction because it allows incremental processing (even for head-driven grammars), it is data-driven, no subsumption check is needed, and preference values attached to lexical items can be used to guide best-first search.
We discuss the scanning step for bottom-up Earley deduction and indexing schemes that help avoid useless deduction steps.
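I could not resist sketching a bottom-up chart recognizer. To be clear, this is plain CYK on a grammar in Chomsky normal form, a relative of the paper’s Earley deduction rather than the method itself, but it shows the bottom-up, data-driven flavour:

```python
from itertools import product

def cyk(words, lexicon, rules, start="S"):
    """CYK recognition: build all constituents bottom-up from the words.

    lexicon maps word -> set of preterminals; rules maps
    (B, C) -> set of parents A for binary rules A -> B C (CNF).
    """
    n = len(words)
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):                  # scan: lexical edges
        chart[i][i + 1] = set(lexicon[w])
    for span in range(2, n + 1):                   # complete: combine edges
        for i in range(n - span + 1):
            k = i + span
            for j in range(i + 1, k):
                for B, C in product(chart[i][j], chart[j][k]):
                    chart[i][k] |= rules.get((B, C), set())
    return start in chart[0][n]

lexicon = {"time": {"N"}, "flies": {"V"}, "fast": {"Adv"}}
rules = {("V", "Adv"): {"VP"}, ("N", "VP"): {"S"}}
print(cyk("time flies fast".split(), lexicon, rules))  # True
```

Earley deduction generalizes this kind of chart from context-free rules to arbitrary definite clauses, which is where the paper’s points about subsumption checks and indexing come in.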
Listening – Broken Bells
This week I am listening to “Broken Bells” by Broken Bells
Read – The Best of Larry Niven
Today I finished reading “The Best of Larry Niven” by Larry Niven
Paper – Pattern Recognition System Design with Linear Encoding for Discrete Patterns
Today I read a paper titled “Pattern Recognition System Design with Linear Encoding for Discrete Patterns”
The abstract is:
In this paper, designs and analyses of compressive recognition systems are discussed, and also a method of establishing a dual connection between designs of good communication codes and designs of recognition systems is presented.
Pattern recognition systems based on compressed patterns and compressed sensor measurements can be designed using low-density matrices.
We examine truncation encoding where a subset of the patterns and measurements are stored perfectly while the rest is discarded.
We also examine the use of LDPC parity check matrices for compressing measurements and patterns.
We show how more general ensembles of good linear codes can be used as the basis for pattern recognition system design, yielding system design strategies for more general noise models.
Paper – Optimizing Web Sites for Customer Retention
Today I read a paper titled “Optimizing Web Sites for Customer Retention”
The abstract is:
With customer relationship management (CRM), companies move away from a mainly product-centered view to a customer-centered view.
Resulting from this change, the effective management of how to keep contact with customers throughout different channels is one of the key success factors in today’s business world.
Company Web sites have evolved in many industries into an extremely important channel through which customers can be attracted and retained.
To analyze and optimize this channel, accurate models of how customers browse through the Web site and what information within the site they repeatedly view are crucial.
Typically, data mining techniques are used for this purpose.
However, there already exist numerous models developed in marketing research for traditional channels which could also prove valuable to understanding this new channel.
In this paper we propose the application of an extension of the Logarithmic Series Distribution (LSD) model to the repeat-usage of Web-based information, and thus to analyze and optimize a Web site’s capability to support one goal of CRM: retaining customers.
As an example, we use the university’s blended learning web portal with over a thousand learning resources to demonstrate how the model can be used to evaluate and improve the Web site’s effectiveness.
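The base model is easy to write down. Here is the plain Logarithmic Series Distribution in Python (the paper applies an extension of it, which I have not reproduced); theta is a fit parameter:

```python
import math

def lsd_pmf(n, theta):
    """Logarithmic Series Distribution:
    P(N = n) = -theta**n / (n * ln(1 - theta)),  n = 1, 2, ...

    The paper fits an extension of this model to how often users
    repeatedly view resources on a site.
    """
    return -theta**n / (n * math.log(1.0 - theta))

# With theta = 0.9, most visitors view a resource once or twice,
# with a long tail of heavy repeat users -- the classic LSD shape:
for n in range(1, 6):
    print(n, round(lsd_pmf(n, 0.9), 3))
# 1 0.391, 2 0.176, 3 0.105, ...
```

Fitting theta per resource then gives a compact, interpretable picture of repeat usage that raw page-view counts do not.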
Studying – Building cross-platform games
This month I am studying “Building cross-platform games”
Have been building cross-platform games for console, desktop, and mobile for decades. Not expecting to learn much on this course, but it will be nice to relax and just follow along and do the class work and class projects with everyone else.
Update: The three-day in-person class ran six hours per day (even though the class starts at 9AM and ends at 5PM, you cannot really count lunch and the coffee breaks), so I got 19 hours of class time and one-on-one time with the instructor.
Listening – Swim
This week I am listening to “Swim” by Caribou
Paper – An analysis of a random algorithm for estimating all the matchings
Today I read a paper titled “An analysis of a random algorithm for estimating all the matchings”
The abstract is:
Counting all the matchings of a bipartite graph has been transformed by Yan Huo into calculating the permanent of a matrix obtained from the extended bipartite graph, and Rasmussen presents a simple approach (RM) to approximate the permanent, which yields a critical ratio of O($n\omega(n)$) for almost all 0-1 matrices, making it a simple, promising, practical way to compute this #P-complete problem.
In this paper, the performance of this method is shown when it is applied to count all the matchings based on that transformation.
The critical ratio is proved to be very large with a certain probability, with an increasing factor larger than any polynomial in $n$, even in the sense of almost all 0-1 matrices.
Hence, RM fails to work well when counting all the matchings via computing the permanent of the matrix.
In other words, we must carefully utilize the known methods of estimating the permanent to count all the matchings through that transformation.
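Rasmussen’s estimator is short enough to sketch from memory, so here is my own Python version of one run of RM plus the sample average; the matrix is a toy example, not one from the paper:

```python
import random

def rasmussen_estimate(A):
    """One run of Rasmussen's unbiased estimator (RM) for the permanent
    of a 0-1 matrix: expand along the first row, choosing one admissible
    column uniformly at random and scaling by the number of choices.
    """
    if not A:
        return 1
    cols = [j for j, a in enumerate(A[0]) if a == 1]
    if not cols:
        return 0
    j = random.choice(cols)
    minor = [row[:j] + row[j + 1:] for row in A[1:]]
    return len(cols) * rasmussen_estimate(minor)

def estimate_permanent(A, samples=10000):
    """Average many runs. The paper's point is that for the matrices
    arising from the matchings transformation, this average needs
    super-polynomially many samples to be reliable."""
    return sum(rasmussen_estimate(A) for _ in range(samples)) / samples

A = [[1, 1, 0], [1, 1, 1], [0, 1, 1]]  # permanent = 3
print(estimate_permanent(A))           # close to 3 on average
```

Each run is unbiased, so the estimator is only as good as its variance, and that variance (the critical ratio) is exactly what the paper shows blows up on the transformed matrices.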