Today I finished reading “The Portable MBA in Project Management” by Eric Verzuh
Archives for 2005
Read – Eat That Frog!
Today I finished reading “Eat That Frog!: 21 Great Ways to Stop Procrastinating and Get More Done in Less Time” by Brian Tracy
Read – Introduction to Machine Learning
Today I finished reading “Introduction to Machine Learning” by Ethem Alpaydin
Listening – The Sound Of White
This week I am listening to “The Sound Of White” by Missy Higgins
Read – Stardust
Today I finished reading “Stardust” by Neil Gaiman
Read – Killer Game Programming in Java
Today I finished reading “Killer Game Programming in Java” by Andrew Davison
Listening – Back To Bedlam
This week I am listening to “Back To Bedlam” by James Blunt
Paper – A General, Sound and Efficient Natural Language Parsing Algorithm based on Syntactic Constraints Propagation
Today I read a paper titled “A General, Sound and Efficient Natural Language Parsing Algorithm based on Syntactic Constraints Propagation”
The abstract is:
This paper presents a new context-free parsing algorithm based on a bidirectional strictly horizontal strategy which incorporates strong top-down predictions (derivations and adjacencies).
From a functional point of view, the parser is able to propagate syntactic constraints, reducing parsing ambiguity.
From a computational perspective, the algorithm includes different techniques aimed at the improvement of the manipulation and representation of the structures used.
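Out of curiosity I dusted off the textbook baseline for context-free parsing. This is plain CYK over a grammar in Chomsky normal form, not the paper’s bidirectional, constraint-propagating algorithm; the grammar and lexicon are toy examples of my own.

```python
from itertools import product

def cyk(words, lexicon, binary_rules, start="S"):
    """Textbook CYK recognizer for a CNF grammar (a baseline sketch,
    not the paper's bidirectional constraint-propagation parser)."""
    n = len(words)
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = {A for A, ws in lexicon.items() if w in ws}
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for B, C in product(chart[i][k], chart[k][j]):
                    chart[i][j] |= binary_rules.get((B, C), set())
    return start in chart[0][n]

# Toy grammar: S -> NP VP, NP -> Det N, VP -> V NP
lexicon = {"Det": {"the"}, "N": {"cat", "dog"}, "V": {"saw"}}
rules = {("Det", "N"): {"NP"}, ("NP", "VP"): {"S"}, ("V", "NP"): {"VP"}}
print(cyk("the cat saw the dog".split(), lexicon, rules))  # True
```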
Read – Words You Don’t Want to Hear During Your Annual Performance Review
Today I finished reading “Words You Don’t Want to Hear During Your Annual Performance Review” by Scott Adams
Studying – Agile project management
This month I am studying “Agile project management”
Most project management courses I’ve taken either deal with non-technical, non-software projects or focus mostly on the classic waterfall style of project management.
This time around I get to fill in the gaps in my knowledge about the specifics of agile project management.
Listening – Ashes Of The Wake
This week I am listening to “Ashes Of The Wake” by Lamb Of God
Paper – Exposing Software Defined Radio Functionality To Native Operating System Applications via Virtual Devices
Today I read a paper titled “Exposing Software Defined Radio Functionality To Native Operating System Applications via Virtual Devices”
The abstract is:
Many reconfigurable platforms require that applications be written specifically to take advantage of the reconfigurable hardware.
In a PC-based environment, this presents an undesirable constraint in that the many already available applications cannot leverage such hardware.
The greatest benefit can only be derived from reconfigurable devices if even native OS applications can transparently utilize them as they would normal, full-fledged hardware devices.
This paper presents how Proteus Virtual Devices are used to expose reconfigurable hardware in a transparent manner for use by typical native OS applications.
Paper – Efficient Tree Layout in a Multilevel Memory Hierarchy
Today I read a paper titled “Efficient Tree Layout in a Multilevel Memory Hierarchy”
The abstract is:
We consider the problem of laying out a tree with fixed parent/child structure in hierarchical memory.
The goal is to minimize the expected number of block transfers performed during a search along a root-to-leaf path, subject to a given probability distribution on the leaves.
This problem was previously considered by Gil and Itai, who developed optimal but slow algorithms when the block-transfer size B is known.
We present faster but approximate algorithms for the same problem; the fastest such algorithm runs in linear time and produces a solution that is within an additive constant of optimal.
In addition, we show how to extend any approximately optimal algorithm to the cache-oblivious setting in which the block-transfer size is unknown to the algorithm.
The query performance of the cache-oblivious layout is within a constant factor of the query performance of the optimal known-block-size layout.
Computing the cache-oblivious layout requires only logarithmically many calls to the layout algorithm for known block size; in particular, the cache-oblivious layout can be computed in O(N lg N) time, where N is the number of nodes.
Finally, we analyze two greedy strategies, and show that they have a performance ratio between Omega(lg B / lg lg B) and O(lg B) when compared to the optimal layout.
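To make the greedy idea concrete, here is a small sketch of one plausible greedy layout: grow each block of up to B nodes from its entry point, always absorbing the frontier node that carries the most search probability. The Node class and this particular greedy rule are my own simplification, not necessarily either of the two strategies the paper analyzes.

```python
import heapq

class Node:
    def __init__(self, prob, children=()):
        self.prob = prob              # probability a search passes through this node
        self.children = list(children)

def greedy_layout(root, B):
    """Pack tree nodes into blocks of size B, greedily by probability."""
    blocks, tick = [], 0

    def push(heap, node):
        nonlocal tick
        heapq.heappush(heap, (-node.prob, tick, node))  # tick breaks ties
        tick += 1

    entries = []                      # heap of block entry points
    push(entries, root)
    while entries:
        _, _, entry = heapq.heappop(entries)
        block, frontier = [], []
        push(frontier, entry)
        while frontier and len(block) < B:
            _, _, node = heapq.heappop(frontier)
            block.append(node)
            for c in node.children:
                push(frontier, c)
        blocks.append(block)
        for _, _, leftover in frontier:   # nodes that did not fit start new blocks
            push(entries, leftover)
    return blocks
```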
Listening – Futures
This week I am listening to “Futures” by Jimmy Eat World
Read – Once More* *with footnotes
Today I finished reading “Once More* *with footnotes” by Terry Pratchett
Read – The Man in the Iron Mask
Today I finished reading “The Man in the Iron Mask” by Alexandre Dumas
Paper – Business Intelligence from Web Usage Mining
Today I read a paper titled “Business Intelligence from Web Usage Mining”
The abstract is:
The rapid growth of e-commerce has made both the business community and customers face a new situation.
Due to intense competition on the one hand, and the customer’s option to choose from several alternatives on the other, the business community has realized the necessity of intelligent marketing strategies and relationship management.
Web usage mining attempts to discover useful knowledge from the secondary data obtained from the interactions of the users with the Web.
Web usage mining has become very critical for effective Web site management, creating adaptive Web sites, business and support services, personalization, network traffic flow analysis and so on.
In this paper, we present the important concepts of Web usage mining and its various practical applications.
We further present a novel approach ‘intelligent-miner’ (i-Miner) to optimize the concurrent architecture of a fuzzy clustering algorithm (to discover web data clusters) and a fuzzy inference system to analyze the Web site visitor trends.
A hybrid evolutionary fuzzy clustering algorithm is proposed in this paper to optimally segregate similar user interests.
The clustered data is then used to analyze the trends using a Takagi-Sugeno fuzzy inference system learned using a combination of an evolutionary algorithm and neural network learning.
The proposed approach is compared with self-organizing maps (to discover patterns) and several function approximation techniques like neural networks, linear genetic programming and a Takagi-Sugeno fuzzy inference system (to analyze the clusters).
The results are graphically illustrated and the practical significance is discussed in detail.
Empirical results clearly show that the proposed Web usage-mining framework is efficient.
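The clustering core of such a framework is easy to sketch. Below is plain fuzzy c-means on a matrix of user-session features; the paper’s i-Miner wraps an evolutionary optimizer around something like this, which I’m not attempting here.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means: soft memberships U and cluster centers.
    (A baseline sketch; i-Miner's evolutionary hybrid is more involved.)
    X is an (n_sessions, n_features) array."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)              # rows are soft memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1))                  # standard membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U
```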
Paper – How many candidates are needed to make elections hard to manipulate?
Today I read a paper titled “How many candidates are needed to make elections hard to manipulate?”
The abstract is:
In multiagent settings where the agents have different preferences, preference aggregation is a central issue.
Voting is a general method for preference aggregation, but seminal results have shown that all general voting protocols are manipulable.
One could try to avoid manipulation by using voting protocols where determining a beneficial manipulation is hard computationally.
The complexity of manipulating realistic elections where the number of candidates is a small constant was recently studied (Conitzer 2002), but the emphasis was on the question of whether or not a protocol becomes hard to manipulate for some constant number of candidates.
That work, in many cases, left open the question: How many candidates are needed to make elections hard to manipulate? This is a crucial question when comparing the relative manipulability of different voting protocols.
In this paper we answer that question for the voting protocols of the earlier study: plurality, Borda, STV, Copeland, maximin, regular cup, and randomized cup.
We also answer that question for two voting protocols for which no results on the complexity of manipulation have been derived before: veto and plurality with runoff.
It turns out that the voting protocols under study become hard to manipulate at 3 candidates, 4 candidates, 7 candidates, or never.
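For a sense of why small candidate sets matter, here is a brute-force constructive manipulation check for a single voter under Borda. With k candidates there are only k! ballots to try, so this is trivial for small k; the paper’s hardness results concern richer settings (weighted voters and coalitions), not this simple case.

```python
from itertools import permutations

def borda_winner(profile, candidates):
    """Borda count: a ballot over k candidates gives k-1 points to its
    top choice, k-2 to the next, and so on. Ties break alphabetically."""
    k = len(candidates)
    scores = {c: 0 for c in candidates}
    for ballot in profile:
        for pos, c in enumerate(ballot):
            scores[c] += k - 1 - pos
    return max(sorted(candidates), key=lambda c: scores[c])

def can_manipulate(profile, candidates, favored):
    """Try every possible ballot for one manipulator (fine for small k)."""
    return any(borda_winner(profile + [list(b)], candidates) == favored
               for b in permutations(candidates))

profile = [["a", "b", "c"], ["a", "c", "b"], ["b", "c", "a"]]
print(can_manipulate(profile, ["a", "b", "c"], favored="b"))
```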
Read – AI for Game Developers
Today I finished reading “AI for Game Developers” by David M. Bourg
Listening – Eye To The Telescope
This week I am listening to “Eye To The Telescope” by KT Tunstall
Read – The Catcher in the Rye
Today I finished reading “The Catcher in the Rye” by J.D. Salinger
Read – Pensees
Today I finished reading “Pensees” by Blaise Pascal
Listening – Talkie Walkie
This week I am listening to “Talkie Walkie” by Air
Bad games
In video games, just like in all other products of human toil, 90% of everything is crap.
90% of the games today are crap.
90% of the games in the 1980s were crap.
The reason the games of the 1980s seem so much better is that so many games were being made (they were cheaper and quicker to produce, though by no means easier) and therefore more good games, in absolute numbers, were being made.
But 90% of them were still crap.
Studying – Project management
This month I am studying “Project management”
Second month, in which I go through months 5 and 6 of the courseware. Not anticipating any problems. Read a few books. Read a few studies. Take a few multiple choice tests.
Update: All done. Easy enough course. Glad I took it though. Hadn’t realised I’d forgotten some of the fundamentals over the years.
Listening – Chuck
This week I am listening to “Chuck” by Sum 41
Read – C++ Coding Standards
Today I finished reading “C++ Coding Standards: 101 Rules, Guidelines, and Best Practices” by Herb Sutter
Cringe induction
Any creative output you have has a similar trait.
Your programming, your writing, your art, your opinions on life and your old social media status updates.
Your earliest work is nothing short of cringeworthy: something you will find intensely embarrassing when looked at through the lens of time.
Fortunately for me, the last time I visited my parents’ house I destroyed all the floppy discs with my angst-ridden creative writing so that it would never haunt me in my future career.
Listening – Uh Huh Her
This week I am listening to “Uh Huh Her” by P.J. Harvey
Paper – Benchmarking and Implementation of Probability-Based Simulations on Programmable Graphics Cards
Today I read a paper titled “Benchmarking and Implementation of Probability-Based Simulations on Programmable Graphics Cards”
The abstract is:
The latest Graphics Processing Units (GPUs) are reported to reach up to 200 billion floating point operations per second (200 Gflops) and to have a price performance of 0.1 cents per Mflop.
These facts raise great interest in the plausibility of extending the GPUs’ use to non-graphics applications, in particular numerical simulations on structured grids (lattice).
We review previous work on using GPUs for non-graphics applications, implement probability-based simulations on the GPU, namely the Ising and percolation models, implement vector operation benchmarks for the GPU, and finally compare the CPU’s and GPU’s performance.
A general conclusion from the results obtained is that moving computations from the CPU to the GPU is feasible, yielding good time and price performance, for certain lattice computations.
Preliminary results also show that it is feasible to use them in parallel.
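For reference, the Ising part of such a benchmark boils down to a lattice kernel like the Metropolis sweep below. This is a plain CPU sketch with periodic boundaries; the paper’s GPU implementation is organized quite differently.

```python
import numpy as np

def ising_sweep(spins, beta, rng):
    """One Metropolis sweep of a 2D Ising lattice (J = 1, periodic
    boundaries) -- a CPU reference sketch, not the paper's GPU kernel."""
    n = spins.shape[0]
    for _ in range(spins.size):
        i, j = rng.integers(0, n, size=2)
        nb = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j] +
              spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
        dE = 2 * spins[i, j] * nb              # energy cost of flipping (i, j)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(64, 64))
for _ in range(100):
    ising_sweep(spins, beta=0.44, rng=rng)     # near the critical point
```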
Listening – Bows + Arrows
This week I am listening to “Bows + Arrows” by The Walkmen
Listening – Madvillainy
This week I am listening to “Madvillainy” by Madvillain
Read – The Richest Man in Babylon
Today I finished reading “The Richest Man in Babylon” by George S. Clason
Read – Pre-Algebra Demystified
Today I finished reading “Pre-Algebra Demystified” by Allan Bluman
Studying – Project management
This month I am studying “Project management”
Taking a refresher course in project management. Six months of work, including reading material and coursework. Thankfully I don’t have to participate in “class discussions” so I won’t waste my time there.
Update: Got through the first four months of material in the first month. Handed in my essays, did the two tests, read the required books. Ho hum. Should be able to wrap it up easily next month.
Listening – Soviet Kitsch
This week I am listening to “Soviet Kitsch” by Regina Spektor
Paper – Surface Triangulation — The Metric Approach
Today I read a paper titled “Surface Triangulation — The Metric Approach”
The abstract is:
We embark on a program of studying the problem of better approximating surfaces by triangulations (triangular meshes) by considering the approximating triangulations as finite metric spaces and the target smooth surface as their Hausdorff-Gromov limit.
This allows us to define in a more natural way the relevant elements, constants and invariants, such as principal directions and principal values, Gaussian and mean curvature, etc.
By a “natural way” we mean intrinsic, discrete, metric definitions as opposed to approximating or paraphrasing the differentiable notions.
In this way we hope to circumvent computational errors and, indeed, conceptual ones, that are often inherent to the classical, “numerical” approach.
In this first study we consider the problem of determining the Gaussian curvature of a polyhedral surface, by using the {\em embedding curvature} in the sense of Wald (and Menger).
We present two modalities of employing these definitions for the computation of Gaussian curvature.
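For contrast with the Wald-style approach, the usual discrete stand-in for Gaussian curvature at a mesh vertex is the angle defect; here is a quick sketch (my own illustration, not one of the paper’s two modalities).

```python
import math

def angle_defect(vertex, wedges):
    """Discrete Gaussian curvature at an interior vertex: 2*pi minus the
    sum of the incident triangle angles at that vertex. `wedges` is a
    list of (a, b) rim-vertex pairs, one per incident triangle; all
    points are 3D coordinate tuples."""
    total = 0.0
    for a, b in wedges:
        u = [a[k] - vertex[k] for k in range(3)]
        v = [b[k] - vertex[k] for k in range(3)]
        dot = sum(x * y for x, y in zip(u, v))
        norm = math.dist(u, [0, 0, 0]) * math.dist(v, [0, 0, 0])
        total += math.acos(max(-1.0, min(1.0, dot / norm)))
    return 2 * math.pi - total
```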
Read – AI Game Engine Programming
Today I finished reading “AI Game Engine Programming” by Brian Schwab
Read – Going Postal
Today I finished reading “Going Postal” by Terry Pratchett
Listening – Greatest Hits
This week I am listening to “Greatest Hits” by Neil Young
Read – Game Programming Golden Rules
Today I finished reading “Game Programming Golden Rules” by Martin Brownlow
Read – Electronics Demystified
Today I finished reading “Electronics Demystified” by Stan Gibilisco
Listening – Funeral
This week I am listening to “Funeral” by Arcade Fire
Read – Game Programming Gems 4
Today I finished reading “Game Programming Gems 4” by Andrew Kirmse
Paper – Classes of service under perfect competition and technological change: a model for the dynamics of the Internet?
Today I read a paper titled “Classes of service under perfect competition and technological change: a model for the dynamics of the Internet?”
The abstract is:
Certain services may be provided in a continuous, one-dimensional, ordered range of different qualities and a customer requiring a service of quality q can only be offered a quality superior or equal to q.
Only a discrete set of different qualities will be offered, and a service provider will provide the same service (of fixed quality b) to all customers requesting qualities of service inferior or equal to b.
Assuming all services (of quality b) are priced identically, a monopolist will choose the qualities of service and the prices that maximize profit but, under perfect competition, a service provider will choose the (inferior) quality of service that can be priced at the lowest price.
Assuming significant economies of scale, two fundamentally different regimes are possible: either a number of different classes of service are offered (DC regime), or a unique class of service offers an unbounded quality of service (UC regime).
The DC regime appears in one of two sub-regimes: one, BDC, in which a finite number of classes is offered, the qualities of service offered are bounded and requests for high-quality services are not met, or UDC in which an infinite number of classes of service are offered and every request is met.
The types of the demand curve and of the economies of scale, not the pace of technological change, determine the regime and the class boundaries.
The price structure in the DC regime obeys very general laws.
Read – A Brief History of Everything
Today I finished reading “A Brief History of Everything” by Ken Wilber
Listening – Love. Angel. Music. Baby.
This week I am listening to “Love. Angel. Music. Baby.” by Gwen Stefani
Paper – Least squares fitting of circles and lines
Today I read a paper titled “Least squares fitting of circles and lines”
The abstract is:
We study theoretical and computational aspects of the least squares fit (LSF) of circles and circular arcs.
First we discuss the existence and uniqueness of LSF and various parametrization schemes.
Then we evaluate several popular circle fitting algorithms and propose a new one that surpasses the existing methods in reliability.
We also discuss and compare direct (algebraic) circle fits.
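The direct (algebraic) fits the abstract mentions are easy to illustrate. The classic Kåsa fit rewrites the circle equation x² + y² = a·x + b·y + c so it becomes a linear least squares problem; it is a standard fit from the literature the paper evaluates, not the paper’s newly proposed method, and direct fits like it are often used to seed the iterative geometric fits.

```python
import numpy as np

def kasa_circle_fit(x, y):
    """Kasa's direct circle fit: solve x^2 + y^2 = a*x + b*y + c by
    linear least squares, then recover center (a/2, b/2) and radius.
    (A classical algebraic fit, not the paper's proposed method.)"""
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = a / 2.0, b / 2.0
    return cx, cy, np.sqrt(c + cx**2 + cy**2)

# Noisy points on a circle of radius 2 centered at (1, -1)
rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 50)
x = 1 + 2 * np.cos(t) + rng.normal(0, 0.05, 50)
y = -1 + 2 * np.sin(t) + rng.normal(0, 0.05, 50)
print(kasa_circle_fit(x, y))   # approximately (1, -1, 2)
```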
Read – Oliver Twist
Today I finished reading “Oliver Twist” by Charles Dickens
Paper – An effective Procedure for Speeding up Algorithms
Today I read a paper titled “An effective Procedure for Speeding up Algorithms”
The abstract is:
The provably asymptotically fastest algorithm within a factor of 5 for formally described problems will be constructed.
The main idea is to enumerate all programs provably equivalent to the original problem by enumerating all proofs.
The algorithm could be interpreted as a generalization and improvement of Levin search, which is, within a multiplicative constant, the fastest algorithm for inverting functions.
Blum’s speed-up theorem is avoided by taking into account only programs for which a correctness proof exists.
Furthermore, it is shown that the fastest program that computes a certain function is also one of the shortest programs provably computing this function.
To quantify this statement, the definition of Kolmogorov complexity is extended, and two new natural measures for the complexity of a function are defined.
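Since the construction generalizes Levin search, here is a toy of that underlying idea: interleave candidate programs, giving each a time budget that doubles per phase and shrinks with the program’s “length”. The generator framing and the fixed candidate list are my own simplifications of what is really an enumeration of all proofs and programs.

```python
def levin_search(candidates, is_solution, max_phase=16):
    """Toy Levin search: candidates is a list of zero-argument callables,
    each returning a fresh generator of partial results; list position
    stands in for program length. In phase p, candidate i is restarted
    and run for 2**(p - i) steps."""
    for phase in range(max_phase):
        for cost, make in enumerate(candidates, start=1):
            budget = 2 ** max(0, phase - cost)   # shorter "programs" get more steps
            gen = make()                         # restart candidate for this phase
            for _ in range(budget):
                try:
                    out = next(gen)
                except StopIteration:
                    break
                if is_solution(out):
                    return out
    return None

# Find a multiple of 91 above 1000 by two "programs" counting at different rates
cands = [lambda: iter(range(0, 10_000, 7)), lambda: iter(range(0, 10_000, 13))]
print(levin_search(cands, lambda v: v > 1000 and v % 91 == 0))
```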