Today I read a paper titled “Philosophy in the Face of Artificial Intelligence”
The abstract is:
In this article, I discuss how the AI community views concerns about the emergence of superintelligent AI and related philosophical issues.
Somebody needs to think about this stuff...
by justin
Today I read a paper titled “Bandit-Based Random Mutation Hill-Climbing”
The abstract is:
The Random Mutation Hill-Climbing algorithm is a direct search technique mostly used in discrete domains.
It repeatedly selects a random neighbour of the best-so-far solution and accepts that neighbour if it is better than or equal to it.
In this work, we propose a novel method to select the neighbour solution using a set of independent multi-armed bandit-style selection units, which results in a bandit-based Random Mutation Hill-Climbing algorithm.
The new algorithm significantly outperforms Random Mutation Hill-Climbing in both OneMax (in noise-free and noisy cases) and Royal Road problems (in the noise-free case).
The algorithm shows particular promise for discrete optimisation problems where each fitness evaluation is expensive.
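Out of curiosity I sketched what the bandit-style neighbour selection might look like on OneMax. This is just my reading of the idea (one UCB1 unit per bit position), not the authors’ code:

    # Hypothetical sketch of a bandit-based Random Mutation Hill-Climber on OneMax.
    # Assumption: one UCB1 selection unit per bit position decides which bit to flip.
    import math
    import random

    def onemax(bits):
        return sum(bits)

    def bandit_rmhc(n_bits=50, budget=5000, c=1.4):
        best = [random.randint(0, 1) for _ in range(n_bits)]
        best_fit = onemax(best)
        counts = [1e-9] * n_bits   # times each bit has been chosen
        rewards = [0.0] * n_bits   # accumulated reward per bit
        for t in range(1, budget + 1):
            # UCB1: favour bits that improved fitness before, explore rarely tried ones
            ucb = [rewards[i] / counts[i] + c * math.sqrt(math.log(t) / counts[i])
                   for i in range(n_bits)]
            i = max(range(n_bits), key=lambda k: ucb[k])
            neighbour = best[:]
            neighbour[i] ^= 1
            fit = onemax(neighbour)
            counts[i] += 1
            rewards[i] += 1.0 if fit > best_fit else 0.0
            if fit >= best_fit:          # accept equal-or-better, as in plain RMHC
                best, best_fit = neighbour, fit
        return best, best_fit

    print(bandit_rmhc())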
by justin
NFTs of images are the equivalent of two children on the playground shouting “You can’t say my words back to me, I copyrighted them!” and the other kid screaming “Yeah? Well I trademarked them!”
by justin
Thinking about robbing a computer store and stealing a GPU as it will be cheaper to cover bail than pay a scalper.
by justin
Today I read a paper titled “Characterization of a Multi-User Indoor Positioning System Based on Low Cost Depth Vision (Kinect) for Monitoring Human Activity in a Smart Home”
The abstract is:
An increasing number of systems use indoor positioning for many scenarios such as asset tracking, health care, games, manufacturing, logistics, shopping, and security.
Many technologies are available and the use of depth cameras is becoming more and more attractive as this kind of device becomes affordable and easy to handle.
This paper contributes to the effort of creating an indoor positioning system based on low cost depth cameras (Kinect).
A method is proposed to optimize the calibration of the depth cameras, to describe the multi-camera data fusion and to specify a global positioning projection to maintain the compatibility with outdoor positioning systems.
The monitoring of the people trajectories at home is intended for the early detection of a shift in daily activities which highlights disabilities and loss of autonomy.
This system is meant to improve homecare health management at home for a better end of life at a sustainable cost for the community.
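The reusable bit for me is the calibration/projection step. A minimal sketch of back-projecting one Kinect depth pixel into a shared world frame with the usual pinhole model; the intrinsics and camera pose below are made-up placeholder numbers, not values from the paper:

    # Back-project one depth pixel into a shared world frame (pinhole model).
    # The intrinsics (fx, fy, cx, cy) and the camera pose R, t are illustrative
    # placeholders, not calibration values from the paper.
    import numpy as np

    fx, fy, cx, cy = 580.0, 580.0, 320.0, 240.0   # hypothetical depth-camera intrinsics

    def pixel_to_world(u, v, depth_m, R, t):
        # Camera-frame point from pixel (u, v) and metric depth.
        x = (u - cx) * depth_m / fx
        y = (v - cy) * depth_m / fy
        p_cam = np.array([x, y, depth_m])
        # Rigid transform into the common world frame shared by all cameras.
        return R @ p_cam + t

    R = np.eye(3)                    # camera aligned with the world axes (placeholder)
    t = np.array([0.0, 0.0, 1.5])    # camera 1.5 m from the world origin (placeholder)
    print(pixel_to_world(400, 220, 2.3, R, t))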
by justin
Today I read a paper titled “A Diagram Is Worth A Dozen Images”
The abstract is:
Diagrams are common tools for representing complex concepts, relationships and events, often when it would be difficult to portray the same information with natural images.
Understanding natural images has been extensively studied in computer vision, while diagram understanding has received little attention.
In this paper, we study the problem of diagram interpretation and reasoning, the challenging task of identifying the structure of a diagram and the semantics of its constituents and their relationships.
We introduce Diagram Parse Graphs (DPG) as our representation to model the structure of diagrams.
We define syntactic parsing of diagrams as learning to infer DPGs for diagrams and study semantic interpretation and reasoning of diagrams in the context of diagram question answering.
We devise an LSTM-based method for syntactic parsing of diagrams and introduce a DPG-based attention model for diagram question answering.
We compile a new dataset of diagrams with exhaustive annotations of constituents and relationships for over 5,000 diagrams and 15,000 questions and answers.
Our results show the significance of our models for syntactic parsing and question answering in diagrams using DPGs.
by justin
Today I read a paper titled “Enhanced Twitter Sentiment Classification Using Contextual Information”
The abstract is:
The rise in popularity and ubiquity of Twitter has made sentiment analysis of tweets an important and well-covered area of research.
However, the 140 character limit imposed on tweets makes it hard to use standard linguistic methods for sentiment classification.
On the other hand, what tweets lack in structure they make up with sheer volume and rich metadata.
This metadata includes geolocation, temporal and author information.
We hypothesize that sentiment is dependent on all these contextual factors.
Different locations, times and authors have different emotional valences.
In this paper, we explored this hypothesis by utilizing distant supervision to collect millions of labelled tweets from different locations, times and authors.
We used this data to analyse the variation of tweet sentiments across different authors, times and locations.
Once we explored and understood the relationship between these variables and sentiment, we used a Bayesian approach to combine these variables with more standard linguistic features such as n-grams to create a Twitter sentiment classifier.
This combined classifier outperforms the purely linguistic classifier, showing that integrating the rich contextual information available on Twitter into sentiment classification is a promising direction of research.
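My rough mental model of the “Bayesian approach to combine these variables”: treat location/time/author as a context-dependent prior and the n-grams as the likelihood. A toy sketch under that assumption; the context table and word counts are invented, and the paper’s actual model may differ:

    # Toy naive-Bayes-style combination of a contextual prior with a unigram
    # likelihood. The context table and word counts are invented placeholders.
    import math
    from collections import Counter

    # P(positive | context), as might be learned via distant supervision (made up).
    context_prior = {("london", "morning"): 0.55, ("london", "night"): 0.40}

    # Per-class unigram counts (made up).
    pos_counts = Counter({"love": 40, "great": 30, "ugh": 2})
    neg_counts = Counter({"love": 5, "great": 4, "ugh": 35})

    def log_likelihood(tokens, counts, vocab_size=1000):
        total = sum(counts.values())
        return sum(math.log((counts[t] + 1) / (total + vocab_size)) for t in tokens)

    def p_positive(tokens, location, time_of_day):
        prior = context_prior.get((location, time_of_day), 0.5)
        log_pos = math.log(prior) + log_likelihood(tokens, pos_counts)
        log_neg = math.log(1 - prior) + log_likelihood(tokens, neg_counts)
        return 1 / (1 + math.exp(log_neg - log_pos))

    print(p_positive(["love", "great"], "london", "night"))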
by justin
Today I read a paper titled “Gearbox Fault Detection through PSO Exact Wavelet Analysis and SVM Classifier”
The abstract is:
Time-frequency methods have been considered the most efficient approach to vibration-based gearbox fault detection.
Among these methods, the continuous wavelet transform (CWT), one of the best time-frequency methods, has been used for both stationary and transitory signals.
Deficiencies of the CWT include overlapping and distortion of signals.
Under these conditions a large amount of redundant information exists, which may cause false alarms or misinterpretation by the operator.
In this paper, a modified method called Exact Wavelet Analysis is used to minimize the effects of overlapping and distortion in the case of gearbox faults.
The Particle Swarm Optimization (PSO) algorithm is used to implement the exact wavelet analysis.
The method has been applied to acceleration signals from a 2D acceleration sensor, acquired by an Advantech PCI-1710 card from a gearbox test setup at Amirkabir University of Technology.
The gearbox is considered in both healthy and chipped-tooth conditions.
A kernelized Support Vector Machine (SVM) with radial basis functions uses the features extracted by the exact wavelet analysis for classification.
The efficiency of this classifier is then evaluated on further signals acquired from the test setup.
The results show that, compared with the CWT, the PSO Exact Wavelet Transform extracts features better, at the price of more computational effort.
In addition, the PSO exact wavelet is faster than a Genetic Algorithm (GA) exact wavelet with an equal population, because of the way mutation and crossover are factored into the PSO algorithm.
The SVM classifier performs very well on the extracted gearbox features.
by justin
Today I read a paper titled “Font Identification in Historical Documents Using Active Learning”
The abstract is:
Identifying the type of font (e.g., Roman, Blackletter) used in historical documents can help optical character recognition (OCR) systems produce more accurate text transcriptions.
Towards this end, we present an active-learning strategy that can significantly reduce the number of labeled samples needed to train a font classifier.
Our approach extracts image-based features that exploit geometric differences between fonts at the word level, and combines them into a bag-of-word representation for each page in a document.
We evaluate six sampling strategies based on uncertainty, dissimilarity and diversity criteria, and test them on a database containing over 3,000 historical documents with Blackletter, Roman and Mixed fonts.
Our results show that a combination of uncertainty and diversity achieves the highest predictive accuracy (89% of test cases correctly classified) while requiring only a small fraction of the data (17%) to be labeled.
We discuss the implications of this result for mass digitization projects of historical documents.
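A minimal sketch of the uncertainty-plus-diversity idea as I understand it: least-confident scoring blended with distance to the already-labelled pool. The classifier, the 50/50 weighting and the random stand-in features are my own choices, not the paper’s:

    # One active-learning round combining uncertainty and diversity.
    # The classifier, the 0.5/0.5 weighting and the data are assumptions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import pairwise_distances

    def select_batch(clf, X_unlabelled, X_labelled, batch_size=10, w=0.5):
        proba = clf.predict_proba(X_unlabelled)
        uncertainty = 1.0 - proba.max(axis=1)                 # least-confident score
        dist = pairwise_distances(X_unlabelled, X_labelled).min(axis=1)
        diversity = dist / (dist.max() + 1e-12)               # far from the labelled pool
        score = w * uncertainty + (1 - w) * diversity
        return np.argsort(score)[-batch_size:]                # highest combined score

    # Usage with random stand-ins for the page-level bag-of-words vectors:
    rng = np.random.default_rng(0)
    X_lab, y_lab = rng.normal(size=(50, 20)), rng.integers(0, 3, 50)
    X_unl = rng.normal(size=(500, 20))
    clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    print(select_batch(clf, X_unl, X_lab))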
by justin
Today I read a paper titled “Expected Similarity Estimation for Large-Scale Batch and Streaming Anomaly Detection”
The abstract is:
We present a novel algorithm for anomaly detection on very large datasets and data streams.
The method, named EXPected Similarity Estimation (EXPoSE), is kernel-based and able to efficiently compute the similarity between new data points and the distribution of regular data.
The estimator is formulated as an inner product with a reproducing kernel Hilbert space embedding and makes no assumption about the type or shape of the underlying data distribution.
We show that offline (batch) learning with EXPoSE can be done in linear time and online (incremental) learning takes constant time per instance and model update.
Furthermore, EXPoSE can make predictions in constant time, while it requires only constant memory.
In addition, we propose different methodologies for concept drift adaptation on evolving data streams.
On several real datasets we demonstrate that our approach can compete with state of the art algorithms for anomaly detection while being an order of magnitude faster than most other approaches.
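The core estimator is easy to prototype: the score of a query is roughly the inner product of its feature map with the mean embedding of the regular data. A sketch using random Fourier features so the model and per-query cost stay constant; that approximation is my choice, the paper is stated for general RKHS embeddings:

    # EXPoSE-style score: similarity of a query to the mean kernel embedding of the
    # regular data. Random Fourier features approximate an RBF kernel so scoring is
    # constant time; this approximation is my choice, not the paper's.
    import numpy as np

    class ExposeSketch:
        def __init__(self, dim, n_features=256, gamma=0.5, seed=0):
            rng = np.random.default_rng(seed)
            self.W = rng.normal(scale=np.sqrt(2 * gamma), size=(dim, n_features))
            self.b = rng.uniform(0, 2 * np.pi, n_features)
            self.mu = np.zeros(n_features)   # running mean embedding
            self.n = 0

        def _phi(self, X):
            return np.sqrt(2.0 / self.W.shape[1]) * np.cos(X @ self.W + self.b)

        def partial_fit(self, X):            # online update: constant time per point
            for p in self._phi(np.atleast_2d(X)):
                self.n += 1
                self.mu += (p - self.mu) / self.n
            return self

        def score(self, X):                  # high score = similar to regular data
            return self._phi(np.atleast_2d(X)) @ self.mu

    model = ExposeSketch(dim=2).partial_fit(np.random.default_rng(1).normal(size=(1000, 2)))
    print(model.score([[0.0, 0.0], [8.0, 8.0]]))   # inlier vs. far-away point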
by justin
Today I read a paper titled “Model-driven Simulations for Deep Convolutional Neural Networks”
The abstract is:
The use of simulated virtual environments to train deep convolutional neural networks (CNN) is a currently active practice to reduce the (real) data-hungriness of deep CNN models, especially in application domains in which large-scale real data and/or groundtruth acquisition is difficult or laborious.
Recent approaches have attempted to harness the capabilities of existing video games and animated movies to provide training data with high-precision groundtruth.
However, a stumbling block is in how one can certify generalization of the learned models and their usefulness in real world data sets.
This opens up fundamental questions such as: What is the role of photorealism of graphics simulations in training CNN models? Are the trained models valid in reality? What are possible ways to reduce the performance bias? In this work, we begin to address these issues systematically in the context of urban semantic understanding with CNNs.
Towards this end, we (a) propose a simple probabilistic urban scene model, (b) develop a parametric rendering tool to synthesize the data with groundtruth, followed by (c) a systematic exploration of the impact of level-of-realism on the generality of the trained CNN model to real world; and domain adaptation concepts to minimize the performance bias.
by justin
Today I read a paper titled “The Singularity May Never Be Near”
The abstract is:
There is both much optimism and pessimism around artificial intelligence (AI) today.
The optimists are investing millions of dollars, and even in some cases billions of dollars into AI.
The pessimists, on the other hand, predict that AI will end many things: jobs, warfare, and even the human race.
Both the optimists and the pessimists often appeal to the idea of a technological singularity, a point in time where machine intelligence starts to run away, and a new, more intelligent species starts to inhabit the earth.
If the optimists are right, this will be a moment that fundamentally changes our economy and our society.
If the pessimists are right, this will be a moment that also fundamentally changes our economy and our society.
It is therefore very worthwhile spending some time deciding if either of them might be right.
by justin
Today I read a paper titled “Optically lightweight tracking of objects around a corner”
The abstract is:
The observation of objects located in inaccessible regions is a recurring challenge in a wide variety of important applications.
Recent work has shown that indirect diffuse light reflections can be used to reconstruct objects and two-dimensional (2D) patterns around a corner.
However, these prior methods always require some specialized setup involving either ultrafast detectors or narrowband light sources.
Here we show that occluded objects can be tracked in real time using a standard 2D camera and a laser pointer.
Unlike previous methods based on the backprojection approach, we formulate the problem in an analysis-by-synthesis sense.
By repeatedly simulating light transport through the scene, we determine the set of object parameters that most closely fits the measured intensity distribution.
We experimentally demonstrate that this approach is capable of following the translation of unknown objects, and translation and orientation of a known object, in real time.
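The analysis-by-synthesis loop is the part worth internalizing: simulate, compare with the measurement, adjust the object parameters, repeat. A deliberately toy version with a one-bounce intensity model and a grid search over a hidden 2D position; the forward model is a stand-in, nothing like the authors’ light-transport simulation:

    # Toy analysis-by-synthesis loop: find the hidden object position whose simulated
    # intensity pattern on the wall best matches the measurement. The 1/r^2*1/r^2
    # forward model and the grid search are stand-ins for the paper's simulator.
    import numpy as np

    wall_points = np.stack([np.linspace(-1, 1, 64), np.zeros(64)], axis=1)
    laser_spot = np.array([0.0, 0.0])

    def simulate(obj_xy):
        # Light travels laser spot -> object -> wall point; intensity falls off with
        # the product of the two squared distances (diffuse bounces).
        d1 = np.linalg.norm(obj_xy - laser_spot)
        d2 = np.linalg.norm(wall_points - obj_xy, axis=1)
        return 1.0 / (d1**2 * d2**2 + 1e-9)

    true_pos = np.array([0.35, 0.8])
    measured = simulate(true_pos) + np.random.default_rng(0).normal(0, 1e-3, 64)

    best, best_err = None, np.inf
    for x in np.linspace(-1, 1, 81):
        for y in np.linspace(0.2, 1.5, 81):
            err = np.sum((simulate(np.array([x, y])) - measured) ** 2)
            if err < best_err:
                best, best_err = (x, y), err
    print("estimated position:", best)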
by justin
Today I read a paper titled “Sensor Fusion of Camera, GPS and IMU using Fuzzy Adaptive Multiple Motion Models”
The abstract is:
A tracking system that will be used for Augmented Reality (AR) applications has two main requirements: accuracy and frame rate.
The first requirement is related to the performance of the pose estimation algorithm and how accurately the tracking system can find the position and orientation of the user in the environment.
Accuracy problems of current tracking devices, considering that they are low-cost devices, cause static errors during this motion estimation process.
The second requirement is related to dynamic errors (the end-to-end system delay, which arises from the lag between estimating the motion of the user and displaying images based on this estimate).
This paper investigates combining the vision-based estimates with measurements from other sensors, GPS and IMU, in order to improve the tracking accuracy in outdoor environments.
The idea of using Fuzzy Adaptive Multiple Models (FAMM) was investigated using a novel fuzzy rule-based approach to decide on the model that results in improved accuracy and faster convergence for the fusion filter.
Results show that the developed tracking system is more accurate than a conventional GPS-IMU fusion approach due to additional estimates from a camera and fuzzy motion models.
The paper also presents an application in cultural heritage context.
by justin
Today I read a paper titled “Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection”
The abstract is:
We describe a learning-based approach to hand-eye coordination for robotic grasping from monocular images.
To learn hand-eye coordination for grasping, we trained a large convolutional neural network to predict the probability that task-space motion of the gripper will result in successful grasps, using only monocular camera images and independently of camera calibration or the current robot pose.
This requires the network to observe the spatial relationship between the gripper and objects in the scene, thus learning hand-eye coordination.
We then use this network to servo the gripper in real time to achieve successful grasps.
To train our network, we collected over 800,000 grasp attempts over the course of two months, using between 6 and 14 robotic manipulators at any given time, with differences in camera placement and hardware.
Our experimental evaluation demonstrates that our method achieves effective real-time control, can successfully grasp novel objects, and corrects mistakes by continuous servoing.
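The servoing idea is easy to caricature: sample candidate gripper motions, ask the network how likely each is to end in a successful grasp, execute the best one, repeat. A sketch with a stand-in scoring function in place of the trained CNN:

    # Caricature of the grasp-servoing loop: sample candidate task-space motions,
    # score each with the grasp-success predictor, execute the best one.
    # predict_success below is a stand-in for the trained CNN, not the real model.
    import numpy as np

    rng = np.random.default_rng(0)
    target = np.array([0.30, -0.10, 0.05])   # pretend object location (hidden from the loop)

    def predict_success(image, gripper_pos, motion):
        # Stand-in for CNN(image, motion): higher when the motion moves the gripper
        # toward the (hidden) object.
        return -np.linalg.norm(gripper_pos + motion - target)

    gripper = np.array([0.0, 0.0, 0.3])
    for step in range(20):
        candidates = rng.normal(scale=0.05, size=(64, 3))       # sampled motion commands
        scores = [predict_success(None, gripper, m) for m in candidates]
        gripper = gripper + candidates[int(np.argmax(scores))]  # execute the best motion
    print("final gripper position:", gripper)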
by justin
Today I read a paper titled “Greedy Deep Dictionary Learning”
The abstract is:
In this work we propose a new deep learning tool called deep dictionary learning.
Multi-level dictionaries are learnt in a greedy fashion, one layer at a time.
This requires solving a simple (shallow) dictionary learning problem, the solution to this is well known.
We apply the proposed technique on some benchmark deep learning datasets.
We compare our results with other deep learning tools like stacked autoencoder and deep belief network; and state of the art supervised dictionary learning tools like discriminative KSVD and label consistent KSVD.
Our method yields better results than all.
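As I read it, each layer is just ordinary shallow dictionary learning run on the previous layer’s codes. A sketch with scikit-learn’s DictionaryLearning standing in for whatever shallow solver the authors actually use; the layer sizes and sparsity penalty are arbitrary:

    # Greedy layer-wise dictionary learning: learn a dictionary, re-encode the data,
    # and repeat on the codes. scikit-learn's DictionaryLearning is a stand-in for
    # the shallow solver; layer sizes and alpha are arbitrary choices.
    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    def greedy_deep_dictionary(X, layer_sizes=(32, 16, 8)):
        codes, layers = X, []
        for n_atoms in layer_sizes:
            # Each layer is a plain shallow dictionary-learning problem on the
            # previous layer's codes (greedy, one layer at a time).
            dl = DictionaryLearning(n_components=n_atoms, alpha=0.1,
                                    max_iter=100, transform_algorithm="lasso_lars")
            codes = dl.fit_transform(codes)
            layers.append(dl)
        return layers, codes

    X = np.random.default_rng(0).normal(size=(150, 60))
    layers, deep_codes = greedy_deep_dictionary(X)
    print(deep_codes.shape)   # (150, 8): deepest-layer representation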
by justin
Today I read a paper titled “Decentralized Optimal Control for Connected and Automated Vehicles at an Intersection”
The abstract is:
In earlier work, we addressed the problem of coordinating online an increasing number of connected and automated vehicles (CAVs) crossing two adjacent intersections in an urban area.
The analytical solution, however, did not consider the state and control constraints.
In this paper, we present the complete Hamiltonian analysis including state and control constraints.
In addition, we present conditions that do not allow the rear-end collision avoidance constraint to become active at any time inside the control zone.
The complete analytical solution, when it exists, allows the vehicles to cross the intersection without the use of traffic lights and under the hard constraint of collision avoidance.
The effectiveness of the proposed solution is validated through simulation in a single intersection and it is shown that coordination of CAVs can reduce significantly both fuel consumption and travel time.
by justin
Today I read a paper titled “Towards the Holodeck: Fully Immersive Virtual Reality Visualisation of Scientific and Engineering Data”
The abstract is:
In this paper, we describe the development and operating principles of an immersive virtual reality (VR) visualisation environment that is designed around the use of consumer VR headsets in an existing wide area motion capture suite.
We present two case studies in the application areas of visualisation of scientific and engineering data.
Each of these case studies utilises a different render engine: a custom engine in one case and a commercial game engine in the other.
The advantages and appropriateness of each approach are discussed along with suggestions for future work.
by justin
When you’re editing multi-camera footage in Premiere and accidentally cut to the wrong camera, and it’s too late to undo because you’ve made several more cuts since then, don’t bother switching the clip after the bad cut back to the camera you wanted (e.g. “camera 1, camera 3, camera 2, camera 1, oops, that was supposed to be camera 1, camera 1, camera 2, camera 1”). Instead, just click on the unwanted cut in the timeline and hit the delete key.
As long as you haven’t done a delete/ripple delete on the intervening video, the accidental camera switch/hard cut is removed completely.
Editing on eight synced cameras my Premiere timeline looks like the forearms of a goth chick at a Bright Eyes concert.
by justin
Today I read a paper titled “Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations”
The abstract is:
Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering.
Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world.
However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks.
To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image.
When asked “What vehicle is the person riding?”, computers will need to identify the objects in an image as well as the relationships riding(man, carriage) and pulling(horse, carriage) in order to answer correctly that “the person is riding a horse-drawn carriage”.
In this paper, we present the Visual Genome dataset to enable the modeling of such relationships.
We collect dense annotations of objects, attributes, and relationships within each image to learn these models.
Specifically, our dataset contains over 100K images where each image has an average of 21 objects, 18 attributes, and 18 pairwise relationships between objects.
We canonicalize the objects, attributes, relationships, and noun phrases in region descriptions and question-answer pairs to WordNet synsets.
Together, these annotations represent the densest and largest dataset of image descriptions, objects, attributes, relationships, and question answers.
by justin
This month I am studying “InDesign CC interactive document fundamentals”
by justin
Today I read a paper titled “Procedural urban environments for FPS games”
The abstract is:
This paper presents a novel approach to procedural generation of urban maps for First Person Shooter (FPS) games.
A multi-agent evolutionary system is employed to place streets, buildings and other items inside the Unity3D game engine, resulting in playable video game levels.
A computational agent is trained using machine learning techniques to capture the intent of the game designer as part of the multi-agent system, and to enable a semi-automated aesthetic selection for the underlying genetic algorithm.
by justin
Today I read a paper titled “Wayfinding and cognitive maps for pedestrian models”
The abstract is:
Usually, routing models in pedestrian dynamics assume that agents have complete, global knowledge of the building’s structure.
However, they neglect the fact that pedestrians possess no information, or only partial information, about their position relative to final exits and the possible routes leading to them.
To get a more realistic description we introduce the systematics of gathering and using spatial knowledge.
A new wayfinding model for pedestrian dynamics is proposed.
The model defines for every pedestrian an individual knowledge representation implying inaccuracies and uncertainties.
In addition, knowledge-driven search strategies are introduced.
The presented concept is tested on a fictive example scenario.
by justin
Today I read a paper titled “Live-action Virtual Reality Games”
The abstract is:
This paper proposes the concept of “live-action virtual reality games” as a new genre of digital games based on an innovative combination of live-action, mixed-reality, context-awareness, and interaction paradigms that comprise tangible objects, context-aware input devices, and embedded/embodied interactions.
Live-action virtual reality games are “live-action games” because a player physically acts out (using his/her real body and senses) his/her “avatar” (his/her virtual representation) in the game stage, which is the mixed-reality environment where the game happens.
The game stage is a kind of “augmented virtuality”; a mixed-reality where the virtual world is augmented with real-world information.
In live-action virtual reality games, players wear HMD devices and see a virtual world that is constructed using the physical world architecture as the basic geometry and context information.
Physical objects that reside in the physical world are also mapped to virtual elements.
Live-action virtual reality games keep the virtual and real worlds superimposed, requiring players to physically move in the environment and to use different interaction paradigms (such as tangible and embodied interaction) to complete game activities.
This setup enables the players to touch physical architectural elements (such as walls) and other objects, “feeling” the game stage.
Players have free movement and may interact with physical objects placed in the game stage, implicitly and explicitly.
Live-action virtual reality games differ from similar game concepts because they sense and use contextual information to create unpredictable game experiences, giving rise to emergent gameplay.
by justin
This month I am studying “Custom textures for retro illustrations”
by justin
Today I read a paper titled “Micro-interventions in urban transport from pattern discovery on the flow of passengers and on the bus network”
The abstract is:
In this paper, we describe a case study in a large metropolis in which, from data collected by digital sensors, we tried to understand the mobility patterns of bus passengers and how this knowledge can suggest interventions to be applied incrementally to the transportation network in use.
We first estimated an Origin-Destination matrix of bus users from datasets of ticket validations and GPS positions of buses.
We then represent the supply of buses, with their routes through bus stops, as a complex network, which allowed us to understand the bottlenecks of the current scenario and, in particular, by applying community discovery techniques, to identify the clusters present in the service supply infrastructure.
Finally, by superimposing the flow of people represented in the Origin-Destination matrix onto the supply network, we illustrate how micro-interventions can be proposed, using the introduction of express routes as an example.
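A minimal sketch of the two data steps as I understand them: counting an origin-destination matrix from (user, boarding stop, alighting stop) records, and running community detection on a stop-to-stop supply network with networkx. The trip records and the tiny network are invented:

    # Build an origin-destination matrix from trip records and find communities in
    # the bus-stop supply network. The records and the tiny network are invented.
    from collections import Counter
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    # (user_id, origin_stop, destination_stop) inferred from ticketing + GPS.
    trips = [("u1", "A", "C"), ("u2", "A", "C"), ("u3", "B", "D"), ("u1", "C", "A")]
    od_matrix = Counter((o, d) for _, o, d in trips)
    print(od_matrix)   # e.g. {('A', 'C'): 2, ...}

    # Supply network: stops are nodes, consecutive stops on a route are edges.
    G = nx.Graph()
    G.add_edges_from([("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("E", "A")])
    for community in greedy_modularity_communities(G):
        print(sorted(community))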
by justin
Today I read a paper titled “Learning to Blend Computer Game Levels”
The abstract is:
We present an approach to generate novel computer game levels that blend different game concepts in an unsupervised fashion.
Our primary contribution is an analogical reasoning process to construct blends between level design models learned from gameplay videos.
The models represent probabilistic relationships between elements in the game.
An analogical reasoning process maps features between two models to produce blended models that can then generate new level chunks.
As a proof-of-concept we train our system on the classic platformer game Super Mario Bros. due to its highly-regarded and well-understood level design.
We evaluate the extent to which the models represent stylistic level design knowledge and demonstrate the ability of our system to explain levels that were blended by human expert designers.
by justin
Today I read a paper titled “A Review of Theoretical and Practical Challenges of Trusted Autonomy in Big Data”
The abstract is:
Despite the advances made in artificial intelligence, software agents, and robotics, there is little we see today that we can truly call a fully autonomous system.
We conjecture that the main inhibitor for advancing autonomy is lack of trust.
Trusted autonomy is the scientific and engineering field that establishes the foundations and groundwork for developing trusted autonomous systems (robotics and software agents) that can be used in our daily life, and can be integrated with humans seamlessly, naturally and efficiently.
In this paper, we review this literature to reveal opportunities for researchers and practitioners to work on topics that can create a leap forward in advancing the field of trusted autonomy.
We focus the paper on the ‘trust’ component as the uniting technology between humans and machines.
Our inquiry into this topic revolves around three sub-topics: (1) reviewing and positioning the trust modelling literature for the purpose of trusted autonomy; (2) reviewing a critical subset of sensor technologies that allow a machine to sense human states; and (3) distilling some critical questions for advancing the field of trusted autonomy.
The inquiry is augmented with conceptual models that we propose along the way by recompiling and reshaping the literature into forms that enable trusted autonomous systems to become a reality.
The paper offers a vision for a Trusted Cyborg Swarm, an extension of our previous Cognitive Cyber Symbiosis concept, whereby humans and machines meld together in a harmonious, seamless, and coordinated manner.
by justin
Today I read a paper titled “WalkieLokie: Relative Positioning for Augmented Reality Using a Dummy Acoustic Speaker”
The abstract is:
We propose and implement a novel relative positioning system, WalkieLokie, to enable more kinds of Augmented Reality applications, e.g., virtual shopping guide, virtual business card sharing.
WalkieLokie calculates the distance and direction between an inquiring user and the corresponding target.
It only requires a dummy speaker binding to the target and broadcasting inaudible acoustic signals.
Then the user walking around can obtain the position using a smart device.
The key insight is that when a user walks, the distance between the smart device and the speaker changes; and the pattern of displacement (variance of distance) corresponds to the relative position.
We use a second-order phase locked loop to track the displacement and further estimate the position.
To enhance the accuracy and robustness of our strategy, we propose a synchronization mechanism to synthesize all estimation results from different timeslots.
We show that the mean errors of ranging and direction estimation are 0.63m and 2.46 degrees respectively, which is accurate even in the case of virtual business card sharing.
Furthermore, in a shopping mall, where the acoustic environment is quite severe, we still achieve high accuracy when positioning one dummy speaker, with a mean position error of 1.28m.
by justin
Today I read a paper titled “Robust Downbeat Tracking Using an Ensemble of Convolutional Networks”
The abstract is:
In this paper, we present a novel state of the art system for automatic downbeat tracking from music signals.
The audio signal is first segmented into frames which are synchronized at the tatum level of the music.
We then extract different kinds of features based on harmony, melody, rhythm and bass content to feed convolutional neural networks that are adapted to take advantage of each feature’s characteristics.
This ensemble of neural networks is combined to obtain one downbeat likelihood per tatum.
The downbeat sequence is finally decoded with a flexible and efficient temporal model which takes advantage of the metrical continuity of a song.
We then evaluate our system on a large collection of 9 datasets, compare its performance to 4 other published algorithms, and obtain a significant increase of 16.8 percentage points compared to the second-best system, for an altogether moderate cost in testing and training.
The influence of each step of the method is studied to show its strengths and shortcomings.
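The decoding step is the one I would want to reimplement: given a downbeat likelihood per tatum, pick the sequence of bar positions that respects metrical continuity. A tiny Viterbi over a fixed 4-tatum bar is a much cruder temporal model than the paper’s, but it shows the shape of the idea:

    # Crude stand-in for the temporal decoding step: Viterbi over the position inside
    # a fixed 4-tatum bar, using per-tatum downbeat likelihoods. The paper's temporal
    # model is more flexible; this only illustrates the decoding idea.
    import numpy as np

    def decode_downbeats(downbeat_prob, bar_len=4):
        T = len(downbeat_prob)
        # Emission: position 0 in the bar should look like a downbeat, the others not.
        emit = np.stack([downbeat_prob] + [1 - downbeat_prob] * (bar_len - 1), axis=1)
        logv = np.full((T, bar_len), -np.inf)
        back = np.zeros((T, bar_len), dtype=int)
        logv[0] = np.log(emit[0] + 1e-12)
        for t in range(1, T):
            for s in range(bar_len):
                prev = (s - 1) % bar_len            # bar position advances by one tatum
                logv[t, s] = logv[t - 1, prev] + np.log(emit[t, s] + 1e-12)
                back[t, s] = prev
        # Backtrack the best path and report the tatums decoded as downbeats.
        path = [int(np.argmax(logv[-1]))]
        for t in range(T - 1, 0, -1):
            path.append(back[t, path[-1]])
        path.reverse()
        return [t for t, s in enumerate(path) if s == 0]

    probs = np.array([0.9, 0.1, 0.2, 0.1, 0.8, 0.2, 0.1, 0.3, 0.7, 0.1, 0.2, 0.1])
    print(decode_downbeats(probs))   # expected: downbeats every 4 tatums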
by justin
This month I am studying “Fundamentals of manga digital illustration”
by justin
Today I read a paper titled “Region Based Approximation for High Dimensional Bayesian Network Models”
The abstract is:
Performing efficient inference on Bayesian Networks (BNs) with large numbers of densely connected variables is challenging.
With exact inference methods, such as the Junction Tree algorithm, clustering complexity can grow exponentially with the number of nodes and so computation becomes intractable.
This paper presents a general purpose approximate inference algorithm called Triplet Region Construction (TRC) that reduces the clustering complexity for factorized models from worst case exponential to polynomial.
We employ graph factorization to reduce connection complexity and produce clusters of limited size.
Unlike MCMC algorithms, TRC is guaranteed to converge, and we present experiments showing that TRC achieves accurate results when compared with exact solutions.
by justin
Have been doing a lot of optimization work for a couple of clients lately and a large majority of my billable hours break down into “staring at a progress bar”, “figuring out why a progress bar isn’t moving fast enough” and “figuring out why the progress bar no longer moves.”
by justin
This month I am studying “Photoshop one-on-one advanced”
by justin
Today I read a paper titled “A Practical Approach to Spatiotemporal Data Compression”
The abstract is:
Datasets representing the world around us are becoming ever more unwieldy as data volumes grow.
This is largely due to increased measurement and modelling resolution, but the problem is often exacerbated when data are stored at spuriously high precisions.
In an effort to facilitate analysis of these datasets, computationally intensive calculations are increasingly being performed on specialised remote servers before the reduced data are transferred to the consumer.
Due to bandwidth limitations, this often means data are displayed as simple 2D data visualisations, such as scatter plots or images.
We present here a novel way to efficiently encode and transmit 4D data fields on-demand so that they can be locally visualised and interrogated.
This nascent “4D video” format allows us to more flexibly move the boundary between data server and consumer client.
However, it has applications beyond purely scientific visualisation, in the transmission of data to virtual and augmented reality.
by justin
Today I read a paper titled “Research Priorities for Robust and Beneficial Artificial Intelligence”
The abstract is:
Success in the quest for artificial intelligence has the potential to bring unprecedented benefits to humanity, and it is therefore worthwhile to investigate how to maximize these benefits while avoiding potential pitfalls.
This article gives numerous examples (which should by no means be construed as an exhaustive list) of such worthwhile research aimed at ensuring that AI remains robust and beneficial.
by justin
This month I am studying “Illustrator CC one-on-one advanced”
Ah, finally found the advanced Illustrator course I was looking for. Time to wrap this up completely.
by justin
Today I read a paper titled “A Collaborative Untethered Virtual Reality Environment for Interactive Social Network Visualization”
The abstract is:
The increasing prevalence of Virtual Reality technologies as a platform for gaming and video playback warrants research into how to best apply the current state of the art to challenges in data visualization.
Many current VR systems are noncollaborative, while data analysis and visualization is often a multi-person process.
Our goal in this paper is to address the technical and user experience challenges that arise when creating VR environments for collaborative data visualization.
We focus on the integration of multiple tracking systems and the new interaction paradigms that this integration can enable, along with visual design considerations that apply specifically to collaborative network visualization in virtual reality.
We demonstrate a system for collaborative interaction with large 3D layouts of Twitter friend/follow networks.
The system is built by combining a ‘Holojam’ architecture (multiple GearVR Headsets within an OptiTrack motion capture stage) and Perception Neuron motion suits, to offer an untethered, full-room multi-person visualization experience.
by justin
Today I finished reading “Hylozoic” by Rudy Rucker
by justin
Today I finished reading “Usagi Yojimbo #30: Thieves and Spies” by Stan Sakai
by justin
Today I finished reading “Fundamentals of Puzzle and Casual Game Design” by Ernest Adams
by justin
Today I finished reading “The Last Dark” by Stephen R. Donaldson
by justin
Today I finished reading “Pieces 7: Hellhound 01 & 02” by Masamune Shirow
by justin
Today I read a paper titled “Empath: Understanding Topic Signals in Large-Scale Text”
The abstract is:
Human language is colored by a broad range of topics, but existing text analysis tools only focus on a small number of them.
We present Empath, a tool that can generate and validate new lexical categories on demand from a small set of seed terms (like “bleed” and “punch” to generate the category violence).
Empath draws connotations between words and phrases by deep learning a neural embedding across more than 1.8 billion words of modern fiction.
Given a small set of seed words that characterize a category, Empath uses its neural embedding to discover new related terms, then validates the category with a crowd-powered filter.
Empath also analyzes text across 200 built-in, pre-validated categories we have generated from common topics in our web dataset, like neglect, government, and social media.
We show that Empath’s data-driven, human validated categories are highly correlated (r=0.906) with similar categories in LIWC.
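The seed-expansion step is the reusable core: embed the seed words, take their mean vector, and pull in the nearest neighbours by cosine similarity. A toy sketch with a handful of made-up vectors standing in for the fiction-trained embedding:

    # Toy seed-based category expansion: average the seed embeddings and return the
    # most cosine-similar vocabulary words. The tiny embedding table is made up;
    # Empath trains its embedding on ~1.8 billion words of fiction.
    import numpy as np

    embedding = {                      # made-up 3-d vectors for illustration
        "bleed":  np.array([0.90, 0.10, 0.00]),
        "punch":  np.array([0.80, 0.20, 0.10]),
        "stab":   np.array([0.85, 0.15, 0.05]),
        "fight":  np.array([0.70, 0.30, 0.10]),
        "picnic": np.array([0.00, 0.20, 0.90]),
        "garden": np.array([0.10, 0.10, 0.95]),
    }

    def expand_category(seeds, top_k=3):
        centre = np.mean([embedding[s] for s in seeds], axis=0)
        def cosine(v):
            return float(v @ centre / (np.linalg.norm(v) * np.linalg.norm(centre)))
        candidates = [(w, cosine(v)) for w, v in embedding.items() if w not in seeds]
        return sorted(candidates, key=lambda p: -p[1])[:top_k]

    print(expand_category(["bleed", "punch"]))   # 'stab' and 'fight' should rank highest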
by justin
This month I am studying “Illustrator one-on-one mastery”
Finding the one-on-one classes a bit easy, probably because I already know Illustrator fairly well.
by justin
Today I read a paper titled “Do You See What I Mean? Visual Resolution of Linguistic Ambiguities”
The abstract is:
Understanding language goes hand in hand with the ability to integrate complex contextual information obtained via perception.
In this work, we present a novel task for grounded language understanding: disambiguating a sentence given a visual scene which depicts one of the possible interpretations of that sentence.
To this end, we introduce a new multimodal corpus containing ambiguous sentences, representing a wide range of syntactic, semantic and discourse ambiguities, coupled with videos that visualize the different interpretations for each sentence.
We address this task by extending a vision model which determines if a sentence is depicted by a video.
We demonstrate how such a model can be adjusted to recognize different interpretations of the same underlying sentence, allowing it to disambiguate sentences in a unified fashion across the different ambiguity types.
by justin
Today I finished reading “Pieces 8: Wild Wet West” by Masamune Shirow
by justin
Today I read a paper titled “The GPU-based Parallel Ant Colony System”
The abstract is:
The Ant Colony System (ACS) is, next to Ant Colony Optimization (ACO) and the MAX-MIN Ant System (MMAS), one of the most efficient metaheuristic algorithms inspired by the behavior of ants.
In this article we present three novel parallel versions of the ACS for the graphics processing units (GPUs).
To the best of our knowledge, this is the first such work on the ACS, which shares many key elements with the ACO and the MMAS, but whose differences in the process of building solutions and updating the pheromone trails make obtaining an efficient parallel version for GPUs a difficult task.
The proposed parallel versions of the ACS differ mainly in their implementations of the pheromone memory.
The first two use the standard pheromone matrix, and the third uses a novel selective pheromone memory.
Computational experiments conducted on several Travelling Salesman Problem (TSP) instances of sizes ranging from 198 to 2392 cities showed that the parallel ACS on Nvidia Kepler GK104 GPU (1536 CUDA cores) is able to obtain a speedup up to 24.29x vs the sequential ACS running on a single core of Intel Xeon E5-2670 CPU.
The parallel ACS with the selective pheromone memory achieved speedups up to 16.85x, but in most cases the obtained solutions were of significantly better quality than for the sequential ACS.
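To remind myself what is actually being parallelized, here is a compact sequential ACS for a random TSP instance (pseudo-random proportional rule, local and global pheromone updates). The parameter values are common textbook defaults, not necessarily the paper’s:

    # Compact sequential Ant Colony System on a random TSP instance, to show what the
    # GPU versions parallelize: tour construction, local pheromone updates, and a
    # global update on the best tour. Parameters are textbook defaults.
    import numpy as np

    rng = np.random.default_rng(0)
    n, ants, iters = 30, 10, 200
    beta, rho, q0 = 2.0, 0.1, 0.9

    cities = rng.uniform(size=(n, 2))
    dist = np.linalg.norm(cities[:, None] - cities[None, :], axis=2) + np.eye(n)
    eta = 1.0 / dist                              # heuristic: inverse distance
    tau0 = 1.0 / (n * dist.mean())
    tau = np.full((n, n), tau0)                   # pheromone matrix

    def build_tour():
        tour = [int(rng.integers(n))]
        unvisited = set(range(n)) - {tour[0]}
        while unvisited:
            i = tour[-1]
            cand = np.array(sorted(unvisited))
            attract = tau[i, cand] * eta[i, cand] ** beta
            if rng.random() < q0:                 # exploit: take the best edge
                j = int(cand[int(np.argmax(attract))])
            else:                                 # explore: proportional choice
                j = int(rng.choice(cand, p=attract / attract.sum()))
            tau[i, j] = tau[j, i] = (1 - rho) * tau[i, j] + rho * tau0   # local update
            tour.append(j)
            unvisited.remove(j)
        return tour

    def tour_length(t):
        return sum(dist[t[k], t[(k + 1) % n]] for k in range(n))

    best, best_len = None, np.inf
    for _ in range(iters):
        for _ in range(ants):
            t = build_tour()
            length = tour_length(t)
            if length < best_len:
                best, best_len = t, length
        for k in range(n):                        # global update along the best tour
            i, j = best[k], best[(k + 1) % n]
            tau[i, j] = tau[j, i] = (1 - rho) * tau[i, j] + rho / best_len
    print("best tour length:", round(best_len, 3))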
by justin
Today I finished reading “A Gentleman of Leisure” by P.G. Wodehouse
by justin
Today I finished reading “Data Science from Scratch: First Principles with Python” by Joel Grus