Today I read a paper titled “Theory and practice”

The abstract is:

*The treatment is informal but essentially rigorous.*

Somebody needs to think about this stuff...

This is a listing of the science papers that I have read over the years.

Some I have read as part of my work.

Many I've read just because the subject was of interest to me.

And others I read because the title was intriguing.

I keep an up-to-date spreadsheet of the papers I've read and the dates I read them. I am currently just below 2,000 papers in total.

Which, from my understanding, is "not a lot."

Most of the science papers are to do with artificial intelligence, virtual reality, augmented reality, computer graphics, robotics, haptics, vision recognition, user interfaces, user experience, and analytics, with some reading on game theory.

I'm not a fast reader, so it takes me an hour or more to do the first read-through of a paper, and I often return to it days or weeks later after having thought about the material some more.

by justin


Today I read a paper titled “Statistical Feature Combination for the Evaluation of Game Positions”

The abstract is:

*This article describes an application of three well-known statistical methods in the field of game-tree search: using a large number of classified Othello positions, feature weights for evaluation functions with a game-phase-independent meaning are estimated by means of logistic regression, Fisher’s linear discriminant, and the quadratic discriminant function for normally distributed features.*

*Thereafter, the playing strengths are compared by means of tournaments between the resulting versions of a world-class Othello program.*

*In this application, logistic regression – which is used here for the first time in the context of game playing – leads to better results than the other approaches.*
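Out of curiosity I sketched the logistic-regression part in Python. This is just my toy reconstruction, nothing like the paper's setup: the two "features" (mobility, corner count) and the win/loss labels are made up, and I fit the weights with plain gradient descent.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(features, labels, lr=0.5, epochs=2000):
    """Fit weights w (plus a bias b) by plain gradient descent on log-loss."""
    n = len(features[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y                      # gradient of log-loss w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Hypothetical position features: (mobility, corner count); label 1 = win.
features = [(0.9, 1.0), (0.8, 0.0), (0.2, 0.0), (0.1, 1.0), (0.7, 1.0), (0.3, 0.0)]
labels   = [1, 1, 0, 0, 1, 0]

w, b = fit_logistic(features, labels)
# The learned weights can then score a new position:
score = sigmoid(w[0] * 0.6 + w[1] * 1.0 + b)
```

In my invented data mobility predicts the result, so the learned mobility weight comes out positive, which is the whole idea: the weights become the evaluation function.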

by justin

Today I read a paper titled “An Integrated Framework for Learning and Reasoning”

The abstract is:

*Learning and reasoning are both aspects of what is considered to be intelligence.*

*Their studies within AI have been separated historically, learning being the topic of machine learning and neural networks, and reasoning falling under classical (or symbolic) AI.*

*However, learning and reasoning are in many ways interdependent.*

*This paper discusses the nature of some of these interdependencies and proposes a general framework called FLARE, that combines inductive learning using prior knowledge together with reasoning in a propositional setting.*

*Several examples that test the framework are presented, including classical induction, many important reasoning protocols and two simple expert systems.*

by justin

Today I read a published paper titled “Logic Programming, Functional Programming, and Inductive Definitions”

And now I get to learn a new language called Haskell, and a new concept called “functional programming.” Though I am not too sure how useful code that has “no side effects” can be. I am not even sure what class of problems I can solve with Haskell. It will be an interesting adventure.
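As far as I can tell so far, “no side effects” just means a function's result depends only on its arguments. A throwaway contrast I wrote for myself (in Python, since I don't know Haskell yet):

```python
total = 0

def impure_add(x):
    """Reads and mutates external state: calling it twice gives different answers."""
    global total
    total += x
    return total

def pure_add(acc, x):
    """Result depends only on the arguments: calling it twice is identical."""
    return acc + x

impure_add(5)     # returns 5 the first time...
impure_add(5)     # ...but 10 the second time
pure_add(0, 5)    # always 5
```

Haskell apparently makes the second style mandatory rather than optional, which I suppose is the point.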

The abstract is:

*An attempt at unifying logic and functional programming is reported. As a starting point, we take the view that “logic programs” are not about logic but constitute inductive definitions of sets and relations. A skeletal language design based on these considerations is sketched and a prototype implementation discussed.*

by justin

Today I read a published paper titled “A note on digitized angles”

Knowledge that is directly applicable to computer graphics. Though it is not something I am going to actually be able to solve, the fact that I am aware of the problem of drawing a line that won’t line up with an arrowhead on a rasterised display is important. It also explains why some pixels crawl and flash when the line is rotating. Some of the math is a little above my head, but I got most of it.
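To see the staircase for myself, I rasterised a line with Bresenham's algorithm, which I am assuming is close enough to the digitization the paper studies. The uneven run lengths in the pixel chain are what make features drawn against the “true” line drift and crawl as it rotates:

```python
def bresenham(x0, y0, x1, y1):
    """Classic integer line rasterization: returns the pixels of a digitized line."""
    pixels = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    x, y = x0, y0
    while True:
        pixels.append((x, y))
        if (x, y) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
    return pixels

# A slope-3/7 line: the horizontal runs come out uneven (2, 2, 3, 1),
# so the digitized line is not the perfectly regular staircase you might expect.
line = bresenham(0, 0, 7, 3)
```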

The abstract is:

*We study the configurations of pixels that occur when two digitized straight lines meet each other.*

by justin

Today I read a published paper titled “A Higher-Order Implementation of Rewriting”

I am really trying to wrap my head around theorem provers and theorem solvers and automatic discovery of theorems. My knowledge of programming is solid but I don’t understand the science. I don’t think I will ever understand the science.

The abstract is:

*Many automatic theorem-provers rely on rewriting. Using theorems as rewrite rules helps to simplify the subgoals that arise during a proof. LCF is an interactive theorem-prover intended for reasoning about computation. Its implementation of rewriting is presented in detail. LCF provides a family of rewriting functions, and operators to combine them. A succession of functions is described, from pattern matching primitives to the rewriting tool that performs most inferences in LCF proofs. The design is highly modular. Each function performs a basic, specific task, such as recognizing a certain form of tautology. Each operator implements one method of building a rewriting function from simpler ones. These pieces can be put together in numerous ways, yielding a variety of rewriting strategies. The approach involves programming with higher-order functions. Rewriting functions are data values, produced by computation on other rewriting functions. The code is in daily use at Cambridge, demonstrating the practical use of functional programming.*
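The higher-order idea itself I could at least play with: rewriting functions as plain values, combined by operators. My own miniature in Python, nothing like LCF's actual code:

```python
# A rewriter takes a term and returns a rewritten term, or None if it doesn't apply.
# Terms are tuples like ('add', x, y) or plain atoms.

def rw_add_zero(t):
    """x + 0  ->  x"""
    if isinstance(t, tuple) and t[0] == 'add' and t[2] == 0:
        return t[1]
    return None

def rw_mul_one(t):
    """x * 1  ->  x"""
    if isinstance(t, tuple) and t[0] == 'mul' and t[2] == 1:
        return t[1]
    return None

def orelse(r1, r2):
    """Try r1; if it fails, try r2. Rewriters are values we compute with."""
    return lambda t: r1(t) if r1(t) is not None else r2(t)

def repeat(r):
    """Apply r until it no longer applies; fail (None) if it never applied."""
    def go(t):
        result = None
        while (step := r(t)) is not None:
            t, result = step, step
        return result
    return go

simplify = repeat(orelse(rw_add_zero, rw_mul_one))
simplify(('mul', ('add', 7, 0), 1))
```

The part I find striking is that `simplify` was never written directly; it was computed from two tiny rules and two operators, which seems to be exactly what the abstract means by rewriting functions being data values.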

by justin

Today I read a published paper titled “The Foundation of a Generic Theorem Prover”

Taking another crack at understanding proofs and how they can be automatically (interactively) solved. Still don’t get it. I feel so fucking dumb.

The abstract is:

*Isabelle is an interactive theorem prover that supports a variety of logics. It represents rules as propositions (not as functions) and builds proofs by combining rules. These operations constitute a meta-logic (or ‘logical framework’) in which the object-logics are formalized. Isabelle is now based on higher-order logic — a precise and well-understood foundation. Examples illustrate use of this meta-logic to formalize logics and proofs. Axioms for first-order logic are shown sound and complete. Backwards proof is formalized by meta-reasoning about object-level entailment. Higher-order logic has several practical advantages over other meta-logics. Many proof techniques are known, such as Huet’s higher-order unification procedure.*

by justin

Today I read a published paper titled “Natural Deduction as Higher-Order Resolution”

Dense. Maybe I am not cut out to understand programming and will always be a little computer hacker and nothing more.

The abstract is:

*An interactive theorem prover, Isabelle, is under development. In LCF, each inference rule is represented by one function for forwards proof and another (a tactic) for backwards proof. In Isabelle, each inference rule is represented by a Horn clause. Resolution gives both forwards and backwards proof, supporting a large class of logics. Isabelle has been used to prove theorems in. Quantifiers pose several difficulties: substitution, bound variables, Skolemization. Isabelle’s representation of logical syntax is the typed lambda-calculus, requiring higher-order unification. It may have potential for logic programming. Depth-first subgoaling along inference rules constitutes a higher-order Prolog.*
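The one bit that clicked was each inference rule being a Horn clause, with depth-first subgoaling acting like Prolog. A propositional toy I wrote to convince myself (mine, hugely simplified; the rules are made up, and real Isabelle deals in quantifiers and higher-order unification):

```python
# Each rule is a Horn clause: (conclusion, [premises]). A fact has no premises.
rules = [
    ('mortal', ['human']),   # human => mortal   (hypothetical toy rules)
    ('human',  ['greek']),
    ('greek',  []),          # a fact
]

def prove(goal, rules, depth=0):
    """Depth-first backwards proof: match the goal against each rule's
    conclusion, then recursively prove that rule's premises as subgoals."""
    if depth > 50:           # crude guard against circular rule sets
        return False
    for head, premises in rules:
        if head == goal and all(prove(p, rules, depth + 1) for p in premises):
            return True
    return False

prove('mortal', rules)
```

Running the goal backwards through the clauses really does feel like a tiny Prolog, which I suppose is the abstract's closing point.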

by justin

Today I read a published paper titled “Proving Termination of Normalization Functions for Conditional Expressions”

I didn’t understand this at all. This knowledge is way outside of my scope of understanding and need. I mean, I “get it” but also I don’t get it. I will not be discouraged.

The abstract is:

*Boyer and Moore have discussed a recursive function that puts conditional expressions into normal form [1]. It is difficult to prove that this function terminates on all inputs. Three termination proofs are compared: (1) using a measure function, (2) in domain theory using LCF, (3) showing that its recursion relation, defined by the pattern of recursive calls, is well-founded. The last two proofs are essentially the same though conducted in markedly different logical frameworks. An obviously total variant of the normalize function is presented as the ‘computational meaning’ of those two proofs. A related function makes nested recursive calls. The three termination proofs become more complex: termination and correctness must be proved simultaneously. The recursion relation approach seems flexible enough to handle subtle termination proofs where previously domain theory seemed essential.*
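To check that I at least “get” the function itself, I transcribed a Boyer–Moore-style normalize into Python. This is my own reconstruction, not the paper's code; the code is easy, and the paper's point is that proving it terminates is hard:

```python
def norm(t):
    """Put nested IFs into normal form: no IF may appear in a test position.
    Terminates on all inputs, but proving that is the hard part."""
    if not isinstance(t, tuple):
        return t                               # an atom is already normal
    _, a, b, c = t
    if isinstance(a, tuple):                   # an IF in the test: rotate it out
        _, u, v, w = a
        return norm(('if', u, ('if', v, b, c), ('if', w, b, c)))
    return ('if', a, norm(b), norm(c))         # atomic test: normalize the arms

nested = ('if', ('if', 'a', 'b', 'c'), 'd', 'e')
norm(nested)
```

The rotation step makes the argument *bigger* before recursing, which is exactly why a naive size measure fails and the termination proof gets subtle.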

by justin

Today I read a published paper titled “Flocks, Herds, and Schools: A Distributed Behavioral Model.”

Love this paper. Implemented a couple of neat demos on the Atari ST. I wonder whether boids have any application to games, though I cannot think of one right now.

The abstract is:

*The aggregate motion of a flock of birds, a herd of land animals, or a school of fish is a beautiful and familiar part of the natural world.*

*But this type of complex motion is rarely seen in computer animation.*

*This paper explores an approach based on simulation as an alternative to scripting the paths of each bird individually.*

*The simulated flock is an elaboration of a particle system, with the simulated birds being the particles.*

*The aggregate motion of the simulated flock is created by a distributed behavioral model much like that at work in a natural flock; the birds choose their own course.*

*Each simulated bird is implemented as an independent actor that navigates according to its local perception of the dynamic environment, the laws of simulated physics that rule its motion, and a set of behaviors programmed into it by the “animator.”*

*The aggregate motion of the simulated flock is the result of the dense interaction of the relatively simple behaviors of the individual simulated birds.*
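My ST demos are long gone, so here is a fresh minimal sketch of the kind of per-bird steering update the model describes. The three urges (cohesion, alignment, separation) and their weights are the usual simplification, not the paper's exact formulation, and I ignore the local-perception radius and just let every bird see every other:

```python
import math

def step(boids, dt=0.1):
    """One update of a toy 2-D flock. Each boid is (pos, vel) with (x, y) tuples."""
    new = []
    for i, (p, v) in enumerate(boids):
        others = [boids[j] for j in range(len(boids)) if j != i]
        # Cohesion: steer toward the average position of the others.
        cx = sum(q[0][0] for q in others) / len(others) - p[0]
        cy = sum(q[0][1] for q in others) / len(others) - p[1]
        # Alignment: steer toward the average velocity of the others.
        ax = sum(q[1][0] for q in others) / len(others) - v[0]
        ay = sum(q[1][1] for q in others) / len(others) - v[1]
        # Separation: steer away from any boid that is too close.
        sx = sy = 0.0
        for (qx, qy), _ in others:
            dx, dy = p[0] - qx, p[1] - qy
            if math.hypot(dx, dy) < 1.0:
                sx += dx
                sy += dy
        # Arbitrary weights on the three urges; each bird decides for itself.
        vx = v[0] + dt * (0.5 * cx + 0.3 * ax + 1.0 * sx)
        vy = v[1] + dt * (0.5 * cy + 0.3 * ay + 1.0 * sy)
        new.append(((p[0] + dt * vx, p[1] + dt * vy), (vx, vy)))
    return new

flock = [((0.0, 0.0), (1.0, 0.0)), ((5.0, 0.0), (0.0, 1.0)), ((0.0, 5.0), (0.0, 0.0))]
flock = step(flock)
```

Even this crude version shows the distributed flavor: no bird ever looks at the flock as a whole, yet the group drifts together.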