When you have the best marketing idea, then realise it will cost you $80,000 to buy the three domain names that you want…
Molasses in a Minnesota winter
Windows 10 Pro for Workstations: “I insert a mystical 10 second delay between OS start and when apps can be loaded to ensure a smooth user experience.”
Me: “You run on a machine with 112 cores & 768GB RAM that boots directly from a 512GB battery backed high-speed RAM cache that sits in front of 30TB of RAID0 flash. Why are you inserting a delay?”
Block this you filthy casual
“Y’all got any more of them ‘block this user on LinkedIn’ slots?”
Decades ago, in the “scene” (that would be the warez trading scene) there were the “summer crews” that would crop up like clockwork.
“Summer crews” or “summer groups” were the teenagers home from school on their summer break who got together, formed a warez trading group, spammed the crap out of the scene with their repackaged releases, and then folded within a few months as school went back in session.
You would also see the same cyclic explosion of griefers, noobs and summer guilds on the MUSHes, MOOs, MMOs and hack-n-slash CRPGs with every summer break. And just as quickly, within a couple of months, it all died down to the usual slow simmer within whatever game you were into at the time.
There was a similar, though tangentially different, problem in higher education too: as each freshman class rolled in and started accessing the shared online social spaces, the n00bs lacked the social mores and graces of the local environment, which led to the usual friction between the old hands (people who were in their 2nd year) and the freshmen.
LinkedIn, for the past 18 months, as so many became WFH, turned into a perpetual “summer group”/freshman class, but with adults. At least, I think they’re adults. They get offended by words like “shit” and “fuck” in a professional setting, but then act like entitled, spoiled children when you block them because you aren’t interested in the CRM-automated messaging systems they have deployed, where it is nothing but pitch, pitch, pitch with every interaction.
The problem with LinkedIn is that everyone is trying to optimize, optimize, optimize. Blast that pitch at as many connections as you can. Use up all your InMails for the month to extract maximum value. Invite as many connections as possible. “It’s a numbers game!”
More.
More.
Faster.
Faster.
Slow down my dude. We’re all going to the same grave.
Don’t measure your productivity in how many connections you spammed, measure it in how many lives you touched in a positive way. You’ll be happier for it. And so will your connections.
Fields of red flags
When you get asked to explain the three month gap in your work history that is on your LinkedIn profile but not your C.V.
From 27 years ago.
I used to do stand-up and improv comedy, and got heckled quite a bit, and got pretty good with off-the-cuff comebacks.
“It just seems rather a long time for a software developer to be out of work these days,” she said.
“Yes, I was waiting for my visa approval so I could emigrate to the US, but thank you for the red flag. Imma put it over here with all the other red flags you’ve given me so far. A few more and it will look like a field of poppies where this job interview will go to die.”
“Wow!” she said.
“I know, it was a pretty good comeback wasn’t it? I’m actually impressed with myself.”
“That is just… wow! So… unprofessional,” she said.
“And that concludes the interview. Thank you for taking the time to see me today.”
Eating your own dog food
Imagine this post is written in a lurid shade of crayon on a circle of pastel coloured paper.
My Saturday:
I overslept
I got to drive a car!
I drank really strong coffee from a cup smaller than your thumb.
I went grocery shopping and bought anything I wanted
I ate a sandwich bigger than my head
I took a nap
I read Asterix comics
I played video games for three hours.
I got to taste my dog’s food.
If you don’t think any three of those things would make for the most awesome Saturday you need to go ask for your child-like wonder to be returned to you.
Don’t let anyone tell you that being child-like is a bad thing.
P.S. Y’all looked at your thumb, didn’t you? Made ya look!
P.P.S. That dog food statement is probably going to take a bit of explaining.
Watching – Foundation
Last night I watched the first episode of Foundation from AppleTV.
And…
It. Was. Bad.
The Foundation TV show veers so far from the books and Asimov’s vision that it is an entirely different story with minor elements that are similar, e.g. character names and buzzwordy concepts. Beyond those similarities it bears little in common with the source. And whilst the writers and all the visionaries of the TV show are quite talented, it takes their entire team to be even a fraction as good as Asimov was at the height of his skills. Foundation ranks right up there with John Carter of Mars as yet another bad execution.
Almost every SF and fantasy book that I have adored and loved over the years and that has been adapted into a film or TV show, with the exception of one or two, has been just dreadful. I suspect I will check out the next couple of episodes to see if it gets better, but I’m kinda predisposed to doubting that.
Watching – The Day The Earth Stood Still (2008)
Wow! This was bad.
I am at a loss for words. The film has an apocalyptic “grey goo” scenario in it. Keanu Reeves as an actor has vastly improved in the past 13 years, but he was wooden in this, and honestly, I don’t think his acting is all that bad; I really think the director just gave him shitty direction.
“See that door Keanu? Be the door! Be the door Keanu!”
I feel sorry for Reeves, in a way, to have to turn in this performance.
All the military/intelligence personnel were war-mongering crackpots and jackasses whose solution to every problem was to shoot it.
And Jaden’s character was simply annoying. Most children in films made for adults are annoying, but the writers & director just used Jaden’s character to push the plot forward whenever they ran out of ideas for how to move the story along.
This remake is not a patch on the original 1951 release.
Watching – Free Guy
It was fun in a run of the mill kind of way.
It was what Ready Player One should have been.
Free Guy used a lot of recognizable gaming tropes, some of which were kind of funny, but there were also a lot of “meh, old news” tropes and jokes that just went wide. Maybe if you haven’t ever seen a single gamer-oriented movie with in-jokes before, they might be kinda funny.
The film was obviously made for a young audience.
You want to know the biggest issue I had with the entire movie? When the villain is destroying the servers at the end. Totally broke suspension of disbelief.
1) You aren’t going to put your servers in your building.
2) They won’t be on the ground floor.
3) Redundant servers? Hello? Even without sharding the most basic server architecture wouldn’t work how it was depicted. Or shut down in the way depicted either.
Yeah, I know, my brain can put aside all the other fantastical elements of the film but I get hung up on the server architecture and the server installation.
And yeah, I know the main villain took a fire axe to the computers, but they don’t spark like that when you hit them. They’d probably just keep on trucking.
Frankly I would have just started yanking network cables rather than the dramatic show of violent frustration.
Additional thought: If you watched the trailer, you’ve watched the movie.
Dirty little words
My wife: “Talk dirty to me. En Francais. Did I say that right, en francais?”
Me, with my broken French I have not used in 30 years and having hastily googled key phrases a few years ago that I can now barely pronounce: “Oui. Viens ici ma petite lingette humide. Fascinons-nous avec des fromages affinés à pâte molle.” [Roughly: “Yes. Come here, my little moist wipe. Let us fascinate ourselves with soft-ripened cheeses.”]
My wife: “That’s hot. What did you say?”
Hard to believe I spent more than a year during my 20s living and working in France with a non-English-speaking team, isn’t it?
Paper – Eyes Tell All: Irregular Pupil Shapes Reveal GAN-generated Faces
Today I read a paper titled “Eyes Tell All: Irregular Pupil Shapes Reveal GAN-generated Faces.”
Currently I am pursuing some research into reliable tracking of hands, mouths and eyes, and a colleague surfaced this paper as a “that’s interesting…” side subject. It presents an interesting technique for detecting either doctored (i.e. Photoshopped) images or, more importantly, faces that were generated by a computer algorithm.
The abstract is:
Generative adversarial network (GAN) generated highly realistic human faces have been used as profile images for fake social media accounts and are visually challenging to discern from real ones. In this work, we show that GAN-generated faces can be exposed via irregular pupil shapes. This phenomenon is caused by the lack of physiological constraints in the GAN models. We demonstrate that such artifacts exist widely in high-quality GAN-generated faces and further describe an automatic method to extract the pupils from two eyes and analyze their shapes for exposing the GAN-generated faces. Qualitative and quantitative evaluations of our method suggest its simplicity and effectiveness in distinguishing GAN-generated faces.
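As I understand it, the paper extracts the pupil mask, fits an ellipse to it, and scores the fit with a boundary IoU. As a much simpler stand-in for that idea, here is a toy sketch (my own, not the authors’ code) that scores a pupil boundary polygon by its circularity, 4πA/P², which is 1.0 for a perfect circle and drops sharply for jagged, GAN-style boundaries:

```python
import math

def shape_regularity(points):
    """Circularity score 4*pi*A / P^2: 1.0 for a perfect circle,
    lower for irregular (e.g. jagged) boundaries."""
    n = len(points)
    area = 0.0
    perim = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1          # shoelace formula
        perim += math.hypot(x2 - x1, y2 - y1)
    area = abs(area) / 2.0
    return 4.0 * math.pi * area / (perim * perim)

def ring(n, radii):
    """Toy polygon with per-vertex radii, evenly spaced angles."""
    return [(radii[i % len(radii)] * math.cos(2 * math.pi * i / n),
             radii[i % len(radii)] * math.sin(2 * math.pi * i / n))
            for i in range(n)]

round_pupil = ring(64, [1.0])          # near-circular boundary, score near 1.0
jagged_pupil = ring(64, [1.0, 0.6])    # saw-toothed boundary, far lower score

print(round(shape_regularity(round_pupil), 3))
print(round(shape_regularity(jagged_pupil), 3))
```

A real detector would have to segment the pupils from a face image first; this only illustrates why an irregularity score separates the two cases.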
3D Printer Purchase
The two printers being considered
You can buy a Form3L for around $12,000. The wash station and cure station are another $6,000, which we would purchase later in the year. Plus various accessories. It is precisely what you want, and will work perfectly for everything you want to do with it.
You can buy a Form3 for around $8,000 with all the accessories, and a bunch of resin. It will do everything you want and need.
Making Money
You can also turn the 3D printer into a little cottage industry by selling Form3 print services on eBay to recoup some of the costs. We will need to figure out a costing & pricing system for doing prints. Register on 100kgarages, and perhaps a few other places. Create a small micro/niche website for the service if you want. Joyce could use it for jewellery design. Printing little figurines. Some stuff on Etsy. And so forth. We’d find uses for it, if there is a will.
Financial Impact
How does either of these purchases impact us financially? It doesn’t affect retirement. It doesn’t affect our savings. It doesn’t affect our life. It just comes down to timing on payouts from my day job and my annuities.
Form3 Financial Impact
If I buy the regular Form3, the only impact it will have is on other stuff I wish to purchase in the next few months. That’s pretty much it. It is about two weeks payouts. It pushes out expansion of the farm to later in the year. It has no impact on our savings, retirement funds, crypto, annuities, lifestyle or our AMEX pay down.
Form3L Financial Impact
If I buy the Form3L, it means I won’t be buying storage for the farm for at least three months. And my workshop tool purchases will take a hiatus for a few months. “You see, the problem with having an unlimited budget isn’t the spending of the money, it’s where do you put all the tools…?” I am effectively spending three weeks’ income on the printer. Or about three months of toys. It stalls our AMEX pay down by one month, maybe two, pushing us out to December for final payoff.
Timing of purchase
If I time it just right, e.g. the 21st, then everything works out fine and dandy. It also means the AMEX bill isn’t due until the 14th of the following month. I can either pay down a chunk of the amount owed prior to that, or throw it into Logix and pay it when due. Essentially I will be paying off the entirety of the cost of the printer 45 days from now, interest free. Works for me.
Pro Service Plan
Do I really need the $1,000 pro service plan? I don’t care about hot-swap printers or the pro-level walkthrough. I do care about making sure the printer works, and continues to work. But the printer usage will be pretty lightweight unless I am selling prints online.
Referral Code
If I use a referral code I get $500 off, which is the cost of shipping to my door.
What if we are declined by AMEX?
The potential that AMEX may not approve the amount is high on the list of risks. In which case we stash the money in Logix temporarily, use the guy in Pasadena to print my parts, and then just wing it. I am going to probe AMEX now to see if I can get approval for the purchase.
I have just verified that AMEX will approve me for the $15,000 purchase. So now it is simply a matter of timing.
Risks
Losing my job. Though I suspect this is a pretty low risk. The probability is quite low at this time unless I royally fuck up. Losing the job means I have to pay for it from our savings. I’d rather not do that and just pay it off with what I earn.
Second build platform/plate
The Form3 uses a transparent window that is good for so many hundreds of hours of printing. Whilst I don’t expect to use it up rapidly, it does have a finite lifespan, and a new one is $300, so I should plan on obtaining a second build plate with the initial purchase. And then keeping a couple more on the shelf for when I need them.
[Note: I was actually referring to the resin tanks here, I didn’t have the terminology of the parts clear in my head]
Recycling & Reusing
I need to read up and watch videos on best practices for recycling resin & IPA.
Latex gloves
I should keep several boxes of latex gloves on hand, in the basement, near the 3D printer. Tricky during the pandemic, but this too shall pass.
3D Printer Cabinet
I need to build a cabinet on wheels to house the 3D printer. The cabinet should have doors on front and rear to access the printer, but also to keep the printer clean and dust free.
It should have a drawer to keep the resin containers, IPA, tools, accessories, Wash L, Cure L, power bar, UPS, and additional materials.
Dimensions of Printer & Stations
Form3L dimensions 30.3 × 20.5 × 29.1 in
We need to be aware that the front of the Form3L flips out and up, adding about another 8″ to 10″ to its overall height, making the Form3L roughly 38″ tall when opened. Placed on a cart that I estimate to be 36″ tall, the Form3L would sit at just over the 6′ mark when opened.
I cannot find dimensions for the Cure or Wash stations, but my estimate is the Cure station is about 18″ tall, and the Wash station is about 30″ tall, but 48″ tall when the lid is opened. The Wash station is about 36″ wide, the Cure station about 30″ wide.
These dimensions mean that you cannot really have a single cabinet to hold the printer, the cure station, the wash station, tools, accessories and extra resin and IPA. You would need either two cabinets, or one very wide or deep cabinet. The cabinet would be about 72″ tall, and about 60″ wide. That is a big cabinet.
We could solve this by stacking everything vertically. The Wash station on the bottom, the Form3L in the middle, the Cure station on the top. We do this by making the Wash station sit on a platform that slides out to permit access to the top of the station. So long as you are careful when moving the Wash station on the sliding platform then no sloshing of solvent would occur. This would make the cabinet about 90″ tall. This isn’t a terrible idea.
Power & Network
Runs on 100V to 240V (meaning it will work in Europe and the UK just fine), and consumes 650W, so it will work on a UPS just fine too.
It has both an ethernet port and WiFi. I suspect it requires connection to the FormLabs cloud and registration with them unfortunately.
Preparing the Workshop
I need to sweep up all the dust in the workshop, finish attaching the lights, finish up the table saw cabinet, organize a lot of stuff in the workshop to make it more functional, and also finish off the duct work. That looks like it is going to take until Sunday for me. At least for the organizing, sweeping up debris, finishing the duct work and finishing the wiring on the lights. I can also mount the shop fans on the back wall, and install the track lighting above the washer/dryer.
- Install track lighting above washer/dryer
- Attach clamp storage to cabinets (temporary attachments)
- Finish up wiring for shop lights
- Strap duct work to wall and ceiling
- Replace hokey section of duct work that never worked properly
- Sweep up dust and debris
- Organize wood pile
- Organize tools on router table
- Organize MFT
- Assemble and finish up table saw workbench
Buying Without The Pro Service Support
I have reached out to Formlabs to ask if the $1,000 pro service surcharge is absolutely necessary in my particular case, as I will only be using the printer for hobbies. Saving this $1,000 effectively reduces the cost of the printer by the tax amount.
Referral Codes & Coupons
I can use a referral code (apparently these cannot be used in their online order form) that will give me $500 off, which is the cost of shipping.
None of the coupon websites appear to have a working coupon valid for the website.
Update #1
The printer has been purchased. I was able to ask questions of the sales rep, and also get the $1,000 pro service plan removed. I can purchase an extended warranty, and I have up to 60 days to do that. I will probably purchase it mid-month, or maybe even the month after, as the sales rep said he could kind of make that happen if needed. The WashL and CureL are, as expected, having supply chain issues, so I couldn’t have ordered those even if I had wanted to. Total cost was just a hair over $14,000. I bought three types of resin, and three separate resin tanks, for grey, black and clear. I also purchased two litres of each resin because the Form3L needs to be loaded up with two 1L bottles of each type, the tanks being so massive. It takes an entire litre just to fill the resin tank.
Update #2 – Printer Arrived
The Form3L printer arrived today in the afternoon on a huge pallet with all the accessories and such. The only item that did not arrive was the clear resin.
I took pictures of the printer on the pallet. It was stacked 6ft tall at least. Joyce and I unloaded it, and having previously watched the videos on how to unbox, it went smoothly. I put the printer on the MFT assembly bench for now. I’ll find it a more permanent home later in the year when I build a cabinet for it.
I sent my first print just before 10PM, a 20mm XYZ calibration cube.
Cleaning up and tidying up
I was able to clean up my workshop a bit today. Put some tools into boxes to at least make it a little easier. I put up two clamp brackets on the side of a cabinet. One bracket will hold the big Bessey parallel clamps, and the other bracket holds the regular Bessey clamps. I tried putting the small clamps on the bracket but they just fall right off due to their mass. I will probably get a few of the Woodpeckers brackets and those will get attached to the wall, or the side of a more permanently installed cabinet.
Just that brief bit of tidying up made me feel so much better. Just puttering around in the workshop. My therapy.
I think I would like to spend an hour in the workshop tomorrow doing the same thing. Put up more brackets, attach fans to ceiling, and so forth.
Track saw tracks
I also put four of the Fastcap track saw track holders onto the back of the garage door. I had eight of those ready to install. I have space on the back of the garage door for six more brackets. Though my brain said “need to buy more,” I think I might hold off because I want to make some brackets for the Woodpeckers story sticks, so those will take up some room too.
Cabinet
My intention is to assemble a dedicated 3D printer cabinet over the next couple of weeks. I will make further notes about that beyond the ones mentioned prior in a separate section.
Slow tank fill
The print that is currently taking place states it will take 2 hours or so to finish. But looking at the print, the printer doesn’t seem to have dispensed that much resin into the tank, so I am unsure what is going on right now. I will stay awake until the print has finished and then take a further look. We are currently around layer 40 of 400+.
A few other people have mentioned the slow tank fill. It might be that the bite valve on the resin bottles is not dispensing properly. I can take a look at that, maybe poke it with an X-Acto blade to get the spice flowing a little faster. I will wait for the print to finish and see what I get.
Labelling the Tank
I need to put the “Grey v4” label directly on the tank. I have put a label on the tank storage box already.
Bulk Solvent
I need to find a place that can sell me IPA and TPM in bulk, at a reasonable cost. TPM is apparently better than IPA, but I think I will stick with IPA for the time being; once I get the WashL and CureL stations I will most likely switch over to TPM. I am currently using two pints of IPA in a mason jar with a toothbrush to clean up my prints, which works fine for the small parts I want to print right now. I put them out on the patio table for a few hours to cure, or if I am in the workshop, just outside the workshop door in the afternoon sun, and they cure in no time.
3D Printer Cabinet More Thoughts
I already detailed some notes on a cabinet for the 3D printer. Having seen the size of it, and the accessories I need for it, my thought is to build a shallow, wide, tall cabinet, with doors that open to protect everything from dust, but primarily to store all the accessories and resins and such. I am probably going to have a half-dozen or more resin tanks, which are quite large, and probably 20L or more of resin, both in use and in stock ready to go. Figure you will probably keep 4L or 5L of the resin you use regularly, and 2L or 3L of the resin you only use a little. Grey, black, clear and white will be 20L of resin right there. Plus Tough, Jewellery and a few others.
I will also keep a couple of boxes of Nitrile gloves. Hand tools. Gallons of IPA or TPM, both in the wash station and also on-hand for refills.
I am also thinking this cabinet should have a place to prop up my laptop, with a power supply to attach to.
I am also thinking that the cabinet should have a UPS to run the printer for a few hours in the event of a power failure. You don’t want a power outage in the middle of a 10 hour print.
I will need to figure out what kind of wattage the printer draws, and whether I can get a reasonably priced Li-Po or Ni-Cad UPS to run it for 10+ hours without power.
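Using the 650W rated draw from the spec above, a quick back-of-envelope calculation shows why a 10-hour bridge is a tall order for an off-the-shelf UPS. Note the assumptions: 650W is the rated maximum (real average draw during a print is likely much lower), and the 85% inverter efficiency is a guess:

```python
# Back-of-envelope UPS sizing for a 10-hour outage.
# Assumes the Form3L draws its rated 650 W continuously (worst case)
# and an 85% inverter efficiency (assumed figure).
RATED_WATTS = 650
HOURS = 10
INVERTER_EFF = 0.85

battery_wh = RATED_WATTS * HOURS / INVERTER_EFF
print(f"Battery capacity needed: {battery_wh:,.0f} Wh")  # ~7,647 Wh
```

Even at a (guessed) 200W average draw that is still roughly 2.4kWh, which points toward a LiFePO4 power station rather than a conventional lead-acid UPS.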
Where the current wood pile is, or possibly where the rack of non-food items is stored might be a good choice. If I build those cabinets above the washer/dryer then a large majority of those items will migrate there. Paper products, detergents, etc. If I build a pedestal cabinet that sits between washer and dryer, that pedestal could hold all of the bottles of Tide, bottles of bleach, boxes of bounce, and any other laundry supplies. Then the cabinets above would hold paper products, other cleaning supplies.
Storage of resin cartridges
I am thinking a couple of drawers, with special inserts/dividers that snugly hold each resin bottle in an orientation that would prevent them from leaking. The plastic tabs on the bottom of the resin bottles might be reusable, in which case we can put the plugs back in. But I think resin bottles stored on their side, with the valve at the top would work.
This could simply be a few large drawers that hold ten or more bottles of resin each, with dividers to hold the bottles in place. Plan on keeping four bottles of each popular resin type in stock, and three bottles of the less popular resins.
Storage of resin tanks
I am thinking these can be simple cubby holes to store each resin tank. Deep enough to slide a tank in lengthways.
I would want to be able to store five or six resin tanks at the very least. This storage could be a long, shallow drawer, or a drawer with no sides, that holds a resin tank, with a label on the front of each drawer describing the type of resin tank to be found inside.
Tools drawer
A simple drawer with Kaizen foam or similar that holds the various hand tools: side snips, flush-cut Japanese saw, perhaps some sandpaper, a palette knife for pulling things from the build plate, and resin tank cleaning tweezers.
Work surface
A slide out work surface where I can place the build plate or hand tools I am currently using.
Build Plate Storage
Enough storage for two extra build plates. This should probably be a drawer that holds the build plates in kaizen foam slots.
FDM Printer Cubby
A cubby specifically for a Prusa i3/i5 FDM printer.
FDM Printer Accessories
A drawer or other storage area for FDM printer accessories, e.g. build plate.
IPA Storage
A storage drawer for extra 1 gallon IPA bottles.
This should be a storage drawer that can hold 4 to 6 gallons of IPA, with dividers to keep the bottles separated.
PLA Filament Storage
A storage drawer for PLA filament storage. I assume we are using 8″ spools.
Glove Storage
I need a drawer that holds about a half-dozen boxes of nitrile gloves, along with a box of nitrile gloves currently in use.
Tip Out Trash Can
I should have a trash can that can be tipped out at an angle, and stay in place, that lets me clean up a print without bits getting everywhere.
Thoughts about WashL & CureL
The WashL when opened is 45″ tall. I think we should put the WashL on a slide-out; it can then be pulled out, loaded up with a build plate, then slid back into its cubby hole.
This is the page for the WashL and CureL boxes that contain the dimensions: https://formlabs.com/post-processing/wash-cure/tech-specs/
If I use drawer slides to hold the WashL then I want to make sure that the slides are heavy duty rated for hundreds of pounds, and probably lock in place.
These are the dimensions for the CureL
Printing
First print of a low-detail dimensional test cube, 20mm on a side, failed. Printed on a raft at 100µm layer thickness with Grey V4. No adherence.
I note that the resin did not dispense correctly from either resin container. Very low, sporadic flow.
I am not familiar with the Form3L, but I assume it would want to dispense as much as the regular Form3. By the time the tank had enough resin in it, the print was at layer 100+. The tank was very dry to start. I aborted the print at around layer 200.
Thoroughly cleaned the cured resin from the tank and cleaned off the build platform from the faintest smear of cured resin that remained and I am now trying the print again.
Formlabs does not use particularly accurate weight sensors on the resin containers to gauge how much resin is in them on the Form3L. Probably weight sensors with an inverse log resistive function, so the accuracy drops off markedly as less resin remains in the container.
I am guessing the function that dispenses the resin uses a timer on a valve to gauge how much resin has been dispensed, rather than weighing the resin cartridge. So I am guessing the code that estimates how much resin has been dispensed is the exact same code as on the Form3, i.e. just a function that waits a certain period of elapsed time.
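To make the guess concrete, here is a purely speculative toy model (every number in it is invented for illustration; nothing here is a Formlabs figure) of why a time-based dispense estimate under-fills when the bite valve is sluggish:

```python
# Speculative sketch: if firmware credits resin by valve-open time at a
# nominal flow rate, a partially blocked bite valve means the firmware
# "thinks" the tank is full long before it actually is.
# All rates below are made-up illustration numbers.
NOMINAL_ML_PER_SEC = 2.0     # what the firmware assumes (hypothetical)
ACTUAL_ML_PER_SEC = 0.5      # sluggish valve (hypothetical)
TARGET_ML = 1000             # roughly one litre to fill the big tank

open_seconds = TARGET_ML / NOMINAL_ML_PER_SEC    # firmware stops here
actually_dispensed = open_seconds * ACTUAL_ML_PER_SEC
print(f"Firmware credits {TARGET_ML} ml, tank actually holds {actually_dispensed:.0f} ml")
```

A weight-based estimate would not have this failure mode, since it measures what actually left the cartridge rather than how long the valve was open.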
Now that the resin tank contains enough resin, fingers crossed the second print of the dimensional cube works.
I need to diagnose and read further articles and will open a support ticket if I cannot resolve the following issue:
My dashboard shows that the left resin tank is not installed. But the printer shows that the left resin tank is installed. And it is physically there and fully secured. And I have reseated both resin containers twice.
My dashboard shows that the build platform is not installed. And the printer also indicates that the build platform is not installed. Or rather, the graphic image, and the accompanying text, shows a lack of build platform, but the printer must believe the build platform is there, because if it wasn’t the printer wouldn’t print. The build platform is physically there and fully secured. And I have reseated it and locked it down twice.
I will look for dust or debris in any sensors in the morning when the light in the workshop is better and I am not so tired.
I made sure to open the relief valve on both resin containers during initial installation, and double-checked them during the first failed print to verify I did not forget. I shut the relief valves, removed both resin tanks, verified that the rubber dispense valve was not clogged and could dispense resin, I palpated to verify, reinstalled the resin containers into their respective slots, and re-opened the relief valves on both containers. I did agitate the resin containers before installation, perhaps not enough.
I am currently running a second print to see if the problem has resolved itself. That said, the dashboard, both in Preform (which is simply a QTWebEngine browser page embedded in the Preform application) and the Formlabs dashboard on the website both indicate that the left resin tank is not installed, but the printer clearly indicates it is and is aware of it. This might just be a minor software issue or a synchronization issue.
Setup instructions on printer failed to mention removing the little bit of orange tape on the relief valve before installing. Setup instructions did mention removing the orange plastic tab on the underside of the resin container situated on the dispense valve.
Self-inflicted injury when I smacked my forehead on the build platform locking bracket as I attempted to gently remove the cardboard retainer from the X-axis lead screw. I am a dumb arse.
The large UV-protective window and touch screen need peelable film, not for any protective reason, but because my wife was very disappointed she did not get to peel off large swathes of sticky protective film like on a giant iPhone. Had to console her with peeling off the protective film on the resin tank storage boxes. I note Adam Savage got peelable film on his Form3L. I feel slighted. Sleighted? Slited?
The instructions lack detail on whether the bit of foam holding the wiper arm in the resin tank during transport can be thrown away. Of course it can be, but there is no clear instruction on whether it should be saved. Neither in the packaging, on the resin storage box, nor, as far as I can ascertain, on the Formlabs website.
The printer was exceptionally well packaged (that kind of packaging must cost Formlabs an absolute fortune to manufacture) and arrived safely. I especially like the cardboard sling underneath the printer.

The phone keeps hunting for focus under the glare of the shop lights and the light bouncing off the touch screen. The printer is printing at the time the photograph was taken, but it indicates the build platform is missing, so I am going to assume that’s a software bug. You can clearly see that the printer believes both resin containers are installed, but the dashboard in Preform and on the website thinks the resin container for’ard, closer to the touchscreen, is missing. Both resin containers are securely seated.
Firmware version on machine is rc-1.6.14-369 and was stock from the factory. Connected via WiFi to house network. Preform is 3.22.1 running on Windows 10 Pro for Workstations with Firefox as the browser.
Addendum
The second attempt at the test cube came out flawlessly.
And also, when using Preform and the locally connected printer view on the local network (not the cloud webview), Preform shows that the resin cartridges are both installed properly, and I of course get the confirmation beep to indicate their correct installation. But both the cloud dashboard in Preform, and the dashboard accessible via a web browser, which is effectively the same thing, show that the left resin tank is missing, so I am going to put that down to a minor software bug or synchronization issue.
Will reboot the printer in the morning and see if it resolves itself. The old “Did you try turning it off and back on?” IT office admin trick.
Addendum Update
Removing the build platform and rebooting the printer cleared the “build platform not present” issue, and also cleared the “resin tank not present” issue on the cloud/website dashboard. So I am thinking there are probably some sync issues in the backend API that, as a customer, there isn’t much I can do about except quietly ignore. I am sure it’ll get resolved eventually in a future software patch. So long as the printer prints, I’m happy.
I’m running some other parts on the printer today, and other than keeping an eye on the resin dispensing issue, I suspect those prints will work fine.
I sometimes pick up on inconsequential details that I really shouldn’t be concerning myself with – “Why does the onscreen graphic mentioning the removal of the orange tabs on the resin cartridge only indicate the underside tab, but the text uses a plural form to indicate both the dispense valve tab and the relief valve tab?” or “why is there some splattering of resin on the inside of the flip up door during printing, is that normal? Is the tank too full?” or “those leveling feet need to be 1/2″ longer if you install in a garage with a sloped floor” or “that wifi antenna is too close to the USB port to avoid cross-talk and attenuation” or “blue highlighted stripes on the places where the resin tank inserts and the build platform inserts, oh, the build platform has a magnetic catch as well as a locking lever” or “other unboxing videos showed that the touch screen and UV window had plastic coverings, mine doesn’t, is that a change in manufacturing process or a mistake?” or “how did they determine the universal size for the nitrile gloves they pack in? I wonder how the supply chain issues affected the number of gloves they include. I noticed that an early-2021 printer used blue nitrile gloves, mine uses black, I wonder if it was always black or the pandemic caused them to switch suppliers.”
I like the Form3L. It’s well built. It’s well packaged. A real “slick” product. Even the website and all the technical support materials. The printer is expensive, but it is a quality piece of hardware and software. I will be reaching out to formlabs within a week or two to order the WashL and CureL devices.
Update #3
I need to verify I am up to date on firmware in the printer before I print any further. 3D printers are temperamental beasts, so teething troubles are to be expected. The printer firmware could use a little polish; I think there might be some “someone forgot to disable the sleep timer during a print job” kind of bugs.
“3D printing, it’s like woodworking, but I don’t have to be there to operate the power tools.”
I’ve run two more prints…
First print:
I manually filled a resin tank with about 500ml of black resin. Then I loaded a full resin cartridge into the right-hand resin cartridge slot, and the now half-empty resin cartridge into the left cartridge slot.
So the setup is: 50% full black resin v4 in the left slot, 100% full black resin v4 in the right slot, and a resin tank with about 500ml of resin in it.
I start a 20 hour print.
About 2 hours into the print the LCD screen switches itself off and becomes non-responsive, but the printer continues to print. Sleep mode maybe? I did set the sleep-mode timer to 30 minutes. But the sleep timer engaged in the middle of the print?
About four hours into the print the web dashboard stops receiving any updates from the printer, but the Preform software continues to receive updates.
At five hours into the print, at the 10% mark, the build plate raises up to about the midway point, and the printer informs my Preform software that the resin cartridge is low and there is not enough resin to complete the job.
The print job failed when the material supports gave out, even though the rafts themselves had good adhesion to the build plate. I understand that the build plate will raise up if there is cured resin adhered to the film in the resin tank and the wiper/mixer drag arm magnetically disconnects from the LPU. I cleared the film of cured resin, but unfortunately there was no way for me to do anything but manually cycle power on the printer, which on reboot gave me an “Error 293”, which is, from reading the tech support documentation, a generic error code of “damned if I know.”
What was disconcerting was that even though there was a full black v4 resin cartridge in the right cartridge slot, which the printer was aware of, the printer was pretty insistent that the print would not finish due to running out of ink. 1L full right cartridge, 500ml left cartridge, 500ml in the tank. The Form3L is obviously not aware of how much resin is in the tank. This was after the print had failed, so I cannot tell if the build plate raising up and halting was due to the failed print with cured resin adhered to the FEP film in the tank, or the low resin issue; I suspect the cured resin. During all of this the LCD screen remained blank and unresponsive.
So I was 4+ hours into a 20 hour print, with an unresponsive, blank LCD screen, a printer that wasn’t communicating with the dashboard, and a printer kvetching there wasn’t enough ink to finish a print that should take around 230ml of ink, when there was clearly a full 1L cartridge installed and about 300ml of resin left in the other cartridge (even though the printer said there was 130ml; I think my electronic scale in the workshop is probably a bit more sensitive than the sensor in the Form3L). The printer was paused and I had no way to make it resume the print, because the dashboard doesn’t provide that functionality, nor does it communicate with the printer reliably, and the only controls on the printer are behind a blank, non-responsive touch screen. The LED back light behind the LCD and the illuminated formlabs logo were lit, the build chamber was warm, the interior lights were on, and I could ping the printer.
I suspect what happened is that the print failed, the build platform retracted due to adhered resin on the FEP film, and it sat there for about 30 minutes (I wasn’t present) and then the sleep timer kicked in and the machine went to sleep, but because the print had failed, there was no way to bring the machine out of sleep state. Effectively I was locked out of the machine until power cycle.
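If my theory is right, this amounts to a missing state check before the firmware powers down the UI. A purely hypothetical Python sketch of the guard that seems to be missing (the names `PrinterState` and `may_enter_sleep` are mine, not formlabs’; I have no visibility into the actual firmware):

```python
from enum import Enum, auto


class PrinterState(Enum):
    IDLE = auto()
    PRINTING = auto()
    PAUSED_ERROR = auto()  # failed print waiting on operator intervention


def may_enter_sleep(state: PrinterState, idle_minutes: float,
                    sleep_timeout: float = 30.0) -> bool:
    """Return True only when it is safe to power down the touchscreen/UI.

    A job that is actively printing, or paused on an error, should keep
    the UI awake; otherwise the operator is locked out until a power
    cycle, which is the failure mode described above.
    """
    if state in (PrinterState.PRINTING, PrinterState.PAUSED_ERROR):
        return False
    return idle_minutes >= sleep_timeout
```

The fix costs one extra condition; the symptom it prevents costs a power cycle and a ruined 20 hour print.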
Second print:
After clearing the build plate and thoroughly cleaning the resin tank, I switched the now mostly empty resin cartridge that was on the left with the completely full cartridge that was on the right. So now the configuration is 1L full on the left, about 300ml on the right. I disabled the sleep timer and reran the exact same print. Which printed all the way through.
I printed six identical storage organizer parts, two of which failed. Both parts failed in the same way, at different print layers, but for the same reason: the supports themselves gave way. Fortunately the cured resin did not adhere to the film in the tank and the print was able to finish. I was using the “beta” supports option in Preform, on very narrow parts, so I suspect that had something to do with it. Interestingly, the printer drained the right cartridge to empty and continued printing without any warnings about possibly running out of resin. I wonder if the software has a test condition that doesn’t check the resin cartridges, or handle switchover, in a consistent manner. Easy mistake to make in the code.
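If my hunch is right, it is the classic bug where a remaining-resin check only looks at the currently active cartridge. A toy Python sketch with made-up function names (I have no idea what the real check looks like), contrasting the buggy check against one that counts everything the printer can actually draw from:

```python
def resin_available_buggy(active_ml: float, standby_ml: float,
                          tank_ml: float) -> float:
    # Only looks at the selected cartridge; the standby cartridge and the
    # resin already in the tank are ignored. This kind of oversight would
    # explain the spurious "not enough resin" pause.
    return active_ml


def resin_available(active_ml: float, standby_ml: float,
                    tank_ml: float) -> float:
    # Counts every source: active cartridge, standby cartridge, and the
    # resin already sitting in the tank.
    return active_ml + standby_ml + tank_ml


JOB_NEEDS_ML = 230.0
# The situation from the failed print: ~130ml reported in the active
# cartridge, a full 1L cartridge on standby, ~500ml already in the tank.
assert resin_available_buggy(130.0, 1000.0, 500.0) < JOB_NEEDS_ML   # spurious warning
assert resin_available(130.0, 1000.0, 500.0) >= JOB_NEEDS_ML        # plenty of resin
```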
I will run some more test prints later today that are of a shorter duration and only single parts, and see what I get.
Update #4
For me it is “this is interesting, that’s interesting, ooh! what made that fail? Ooh! How does that work?” Will be ordering the CureL and WashL in a month or two. Also, I am going to put some of my problems down to “User was over-caffeinated and excited about using the product and did not carefully read the instructions.”
Note to self on lesson learned: Manually fill resin tank when deploying a fresh tank to increase chance of successful print.
Second note to self: Design & print a bracket that will let me invert a resin cartridge over the resin tank and hold the cartridge securely so I can wander off and do something else whilst the resin tank is being manually filled.
Update #5 – Prints of darkness
I have used up my two liters of black resin (prints of darkness, get it?) and almost the entire two liters of grey resin. I need to order more. Made a drive cage to insert into my computer workstation to hold the SSDs due to the PSU being honking huge.
Experimenting with a fan cowling that will hold a radiator for the crypto miner server. Still experimenting with that. Also experimented with snap lock tab fittings and I’ve got to say, those are not easy to design.
Designed and printed a couple of other small parts for organizational projects around the home. Created some tabs to let me mount the ISDT battery chargers to the wall.
I may need to investigate cheaper printing options for prototypes.
Update #6 – “Financial impact – What’s the worst that could happen?”
I just switched jobs. I am no longer with Ericsson. I have moved on to a VR hardware start-up that is doing much more exciting work. No financial impact but definitely one of those “pause for thought” moments.
Update #7
Have been so tired and stressed with work I fell asleep at the marble table again. New job is keeping me busy and work/life balance is out of whack. Enjoying the work, not enjoying the hours. Waiting for a printer job to finish up and then heading to bed. I don’t think I am going to get to use my printer very much for a while due to workload and stress with the job. Everything seems to be falling apart in my hands.
Hippos and Ducks Are Verboten
Gold Medal Whiner
A textbook case of how to lose the interest of a candidate in 5 seconds and get yourself blocked on LinkedIn.
Me: “Thank you for making me aware of this opportunity. I don’t work on anything military related, any product or service designed to maim or kill, or anything that can be easily weaponized. Good luck in your continuing candidate search.”
Recruiter: “Oh really? So what makes you so high and mighty? Must be nice to be able to turn down that much money.”†
Recruiter got himself blocked so fast he should try out for the Olympics.
† It was $120K.
Can I Have A Menacing Ring Tone Too?
Protected: Mercurial
A Firewall Made of Stucco
I run the Adobe Creative Cloud crap out of necessity rather than any desire to use that terrible software.
Many of the Adobe apps insist on phoning home for a variety of reasons, and not just to verify proper software licenses.
A few of the Adobe products, InDesign for instance, will 100% crash on launch with an inscrutable error if they cannot phone home.
If I drop the firewall, InDesign launches just fine.
I’ll be damned if I am going to drop my firewall for a piece of software to launch, and I’ll be damned if I will let a piece of software phone home for analytics and crash if it cannot. Doesn’t exactly instill any confidence in me when your software crashes because it cannot talk to some remote server somewhere.
The absolutely absurd thing is, I can redirect the IP of the AWS server that InDesign is trying to reach, in this case lcs-robs.adobe.io (listed in the Adobe documentation as being used for the “Admin Console”: https://www.adobe.com/devnet-docs/acrobatetk/tools/AdminGuide/endpoints.html), to my loopback, and InDesign works just fine. InDesign literally crashes because it cannot open the network port.
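For anyone wanting to try the same trick, the redirect is just a hosts-file entry pointing the hostname at 127.0.0.1. A minimal Python sketch, assuming the standard hosts-file locations (the helper names are mine; appending to the hosts file requires admin/root rights, and Adobe may change or add endpoints at any time):

```python
import sys

# Standard hosts-file location per platform (an assumption; some setups differ).
HOSTS = (r"C:\Windows\System32\drivers\etc\hosts"
         if sys.platform == "win32" else "/etc/hosts")


def loopback_entry(hostname: str) -> str:
    """Build a hosts-file line that pins a hostname to the loopback address."""
    return f"127.0.0.1\t{hostname}\t# keep phone-home traffic on-box"


def redirect_to_loopback(hostname: str, hosts_path: str = HOSTS) -> None:
    """Append the redirect to the hosts file (run once, with admin rights)."""
    with open(hosts_path, "a", encoding="ascii") as fh:
        fh.write(loopback_entry(hostname) + "\n")


# Example (commented out; needs elevated privileges):
# redirect_to_loopback("lcs-robs.adobe.io")
```

The app still gets a port it can open, it just talks to nothing, which is apparently all InDesign needs to stay upright.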
Achievement Unlocked – Get an education
I don’t think I ever posted this. This one’s for you Mum & Dad. I miss you guys so much.
Seriously!?!
Why the fuck would I want emojis in IntelliJ/Pycharm/Webstorm/etc?
Just piss off already.
Go spend all those hours you took implementing this stupid feature and dedicate them to fixing the bugs that prevent the IDE from working properly, you know, like that super important one where Jetbrains search is totally fucking broken.
Paper – On the expressive power of deep neural networks
Today I read a paper titled “On the expressive power of deep neural networks”
The abstract is:
We study the expressivity of deep neural networks with random weights.
We provide several results, both theoretical and experimental, precisely characterizing their functional properties in terms of the depth and width of the network.
In doing so, we illustrate inherent connections between the length of a latent trajectory, local neuron transitions, and network activation patterns.
The latter, a notion defined in this paper, is further studied using properties of hyperplane arrangements, which also help precisely characterize the action of the neural network on the input space.
We further show dualities between changes to the latent state and changes to the network weights, and between the number of achievable activation patterns and the number of achievable labelings over input data.
We see that the depth of the network affects all of these quantities exponentially, while the width appears at most as a base.
These results also suggest that the remaining depth of a neural network is an important determinant of expressivity, supported by experiments on MNIST and CIFAR-10.
Paper – Philosophy in the Face of Artificial Intelligence
Today I read a paper titled “Philosophy in the Face of Artificial Intelligence”
The abstract is:
In this article, I discuss how the AI community views concerns about the emergence of superintelligent AI and related philosophical issues.
Paper – Bandit-Based Random Mutation Hill-Climbing
Today I read a paper titled “Bandit-Based Random Mutation Hill-Climbing”
The abstract is:
The Random Mutation Hill-Climbing algorithm is a direct search technique mostly used in discrete domains.
It repeats the process of randomly selecting a neighbour of a best-so-far solution and accepts the neighbour if it is better than or equal to it.
In this work, we propose to use a novel method to select the neighbour solution using a set of independent multi-armed bandit-style selection units which results in a bandit-based Random Mutation Hill-Climbing algorithm.
The new algorithm significantly outperforms Random Mutation Hill-Climbing in both OneMax (in noise-free and noisy cases) and Royal Road problems (in the noise-free case).
The algorithm shows particular promise for discrete optimisation problems where each fitness evaluation is expensive.
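The baseline algorithm in this abstract is simple enough to sketch. Here is my own naive Python version of plain Random Mutation Hill-Climbing on OneMax (not the authors’ code; the bandit-based variant would replace the uniform neighbour selection with a per-bit multi-armed bandit):

```python
import random


def onemax(bits):
    """OneMax fitness: count of 1-bits; maximum is the string length."""
    return sum(bits)


def rmhc(n_bits=32, max_evals=5000, seed=0):
    """Random Mutation Hill-Climbing on OneMax.

    Repeatedly flip one randomly chosen bit of the best-so-far solution
    and keep the neighbour if it is better than or equal to the current
    best, exactly as the abstract describes.
    """
    rng = random.Random(seed)
    best = [rng.randint(0, 1) for _ in range(n_bits)]
    best_fit = onemax(best)
    for _ in range(max_evals):
        if best_fit == n_bits:          # global optimum reached
            break
        i = rng.randrange(n_bits)       # uniform neighbour selection
        cand = best[:]
        cand[i] ^= 1                    # flip one bit
        fit = onemax(cand)
        if fit >= best_fit:             # accept if better or equal
            best, best_fit = cand, fit
    return best, best_fit
```

Since fitness evaluations are the expensive part on real problems, you can see why steering the bit choice with bandits (instead of `rng.randrange`) is attractive.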
Invoking patent law
NFTs of images are the equivalent of two children on the playground shouting “You can’t say my words back to me, I copyrighted them!” and the other kid screaming “Yeah? Well I trademarked them!”
Hung for sheep as for a lamb
Thinking about robbing a computer store and stealing a GPU as it will be cheaper to cover bail than pay a scalper.
Paper – Characterization of a Multi-User Indoor Positioning System Based on Low Cost Depth Vision (Kinect) for Monitoring Human Activity in a Smart Home
Today I read a paper titled “Characterization of a Multi-User Indoor Positioning System Based on Low Cost Depth Vision (Kinect) for Monitoring Human Activity in a Smart Home”
The abstract is:
An increasing number of systems use indoor positioning for many scenarios such as asset tracking, health care, games, manufacturing, logistics, shopping, and security.
Many technologies are available and the use of depth cameras is becoming more and more attractive as this kind of device becomes affordable and easy to handle.
This paper contributes to the effort of creating an indoor positioning system based on low cost depth cameras (Kinect).
A method is proposed to optimize the calibration of the depth cameras, to describe the multi-camera data fusion and to specify a global positioning projection to maintain the compatibility with outdoor positioning systems.
The monitoring of the people trajectories at home is intended for the early detection of a shift in daily activities which highlights disabilities and loss of autonomy.
This system is meant to improve homecare health management at home for a better end of life at a sustainable cost for the community.
Paper – A Diagram Is Worth A Dozen Images
Today I read a paper titled “A Diagram Is Worth A Dozen Images”
The abstract is:
Diagrams are common tools for representing complex concepts, relationships and events, often when it would be difficult to portray the same information with natural images.
Understanding natural images has been extensively studied in computer vision, while diagram understanding has received little attention.
In this paper, we study the problem of diagram interpretation and reasoning, the challenging task of identifying the structure of a diagram and the semantics of its constituents and their relationships.
We introduce Diagram Parse Graphs (DPG) as our representation to model the structure of diagrams.
We define syntactic parsing of diagrams as learning to infer DPGs for diagrams and study semantic interpretation and reasoning of diagrams in the context of diagram question answering.
We devise an LSTM-based method for syntactic parsing of diagrams and introduce a DPG-based attention model for diagram question answering.
We compile a new dataset of diagrams with exhaustive annotations of constituents and relationships for over 5,000 diagrams and 15,000 questions and answers.
Our results show the significance of our models for syntactic parsing and question answering in diagrams using DPGs.
Paper – Enhanced Twitter Sentiment Classification Using Contextual Information
Today I read a paper titled “Enhanced Twitter Sentiment Classification Using Contextual Information”
The abstract is:
The rise in popularity and ubiquity of Twitter has made sentiment analysis of tweets an important and well-covered area of research.
However, the 140 character limit imposed on tweets makes it hard to use standard linguistic methods for sentiment classification.
On the other hand, what tweets lack in structure they make up with sheer volume and rich metadata.
This metadata includes geolocation, temporal and author information.
We hypothesize that sentiment is dependent on all these contextual factors.
Different locations, times and authors have different emotional valences.
In this paper, we explored this hypothesis by utilizing distant supervision to collect millions of labelled tweets from different locations, times and authors.
We used this data to analyse the variation of tweet sentiments across different authors, times and locations.
Once we explored and understood the relationship between these variables and sentiment, we used a Bayesian approach to combine these variables with more standard linguistic features such as n-grams to create a Twitter sentiment classifier.
This combined classifier outperforms the purely linguistic classifier, showing that integrating the rich contextual information available on Twitter into sentiment classification is a promising direction of research.
Paper – Gearbox Fault Detection through PSO Exact Wavelet Analysis and SVM Classifier
Today I read a paper titled “Gearbox Fault Detection through PSO Exact Wavelet Analysis and SVM Classifier”
The abstract is:
Time-frequency methods for vibration-based gearbox faults detection have been considered the most efficient method.
Among these methods, continuous wavelet transform (CWT) as one of the best time-frequency method has been used for both stationary and transitory signals.
Some deficiencies of CWT are the problem of overlapping and distortion of signals.
In this condition, a large amount of redundant information exists so that it may cause false alarm or misinterpretation of the operator.
In this paper a modified method called Exact Wavelet Analysis is used to minimize the effects of overlapping and distortion in case of gearbox faults.
To implement exact wavelet analysis, Particle Swarm Optimization (PSO) algorithm has been used for this purpose.
This method has been implemented for the acceleration signals from a 2D acceleration sensor acquired by an Advantech PCI-1710 card from a gearbox test setup in Amirkabir University of Technology.
Gearbox has been considered in both healthy and chipped tooth gears conditions.
Kernelized Support Vector Machine (SVM) with radial basis functions has used the extracted features from exact wavelet analysis for classification.
The efficiency of this classifier is then evaluated with the other signals acquired from the setup test.
The results show that in comparison of CWT, PSO Exact Wavelet Transform has better ability in feature extraction in price of more computational effort.
In addition, PSO exact wavelet has better speed comparing to Genetic Algorithm (GA) exact wavelet in condition of equal population because of factoring mutation and crossover in PSO algorithm.
SVM classifier with the extracted features in gearbox shows very good results and its ability has been proved.
Paper – Font Identification in Historical Documents Using Active Learning
Today I read a paper titled “Font Identification in Historical Documents Using Active Learning”
The abstract is:
Identifying the type of font (e.g., Roman, Blackletter) used in historical documents can help optical character recognition (OCR) systems produce more accurate text transcriptions.
Towards this end, we present an active-learning strategy that can significantly reduce the number of labeled samples needed to train a font classifier.
Our approach extracts image-based features that exploit geometric differences between fonts at the word level, and combines them into a bag-of-word representation for each page in a document.
We evaluate six sampling strategies based on uncertainty, dissimilarity and diversity criteria, and test them on a database containing over 3,000 historical documents with Blackletter, Roman and Mixed fonts.
Our results show that a combination of uncertainty and diversity achieves the highest predictive accuracy (89% of test cases correctly classified) while requiring only a small fraction of the data (17%) to be labeled.
We discuss the implications of this result for mass digitization projects of historical documents.
Paper – Expected Similarity Estimation for Large-Scale Batch and Streaming Anomaly Detection
Today I read a paper titled “Expected Similarity Estimation for Large-Scale Batch and Streaming Anomaly Detection”
The abstract is:
We present a novel algorithm for anomaly detection on very large datasets and data streams.
The method, named EXPected Similarity Estimation (EXPoSE), is kernel-based and able to efficiently compute the similarity between new data points and the distribution of regular data.
The estimator is formulated as an inner product with a reproducing kernel Hilbert space embedding and makes no assumption about the type or shape of the underlying data distribution.
We show that offline (batch) learning with EXPoSE can be done in linear time and online (incremental) learning takes constant time per instance and model update.
Furthermore, EXPoSE can make predictions in constant time, while it requires only constant memory.
In addition, we propose different methodologies for concept drift adaptation on evolving data streams.
On several real datasets we demonstrate that our approach can compete with state of the art algorithms for anomaly detection while being an order of magnitude faster than most other approaches.
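The core scoring idea reads as a kernel mean. A toy 1-D Python sketch of the expected-similarity score, my own naive illustration rather than the constant-time streaming formulation from the paper:

```python
import math


def rbf(x, y, gamma=0.5):
    """Gaussian (RBF) kernel on scalars."""
    return math.exp(-gamma * (x - y) ** 2)


def expose_score(x, normal_data, gamma=0.5):
    """Expected similarity of x to the distribution of regular data.

    Naively, the score is the mean kernel similarity of x to the training
    points, i.e. the inner product of phi(x) with the empirical kernel
    mean embedding. A low score flags x as anomalous. (Toy sketch only;
    the paper's estimator avoids this O(n) sum per query.)
    """
    return sum(rbf(x, z, gamma) for z in normal_data) / len(normal_data)


normal = [0.9, 1.0, 1.1, 1.05, 0.95]
assert expose_score(1.0, normal) > expose_score(5.0, normal)  # 5.0 is the outlier
```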
Paper – Model-driven Simulations for Deep Convolutional Neural Networks
Today I read a paper titled “Model-driven Simulations for Deep Convolutional Neural Networks”
The abstract is:
The use of simulated virtual environments to train deep convolutional neural networks (CNN) is a currently active practice to reduce the (real)data-hungriness of the deep CNN models, especially in application domains in which large scale real data and/or groundtruth acquisition is difficult or laborious.
Recent approaches have attempted to harness the capabilities of existing video games, animated movies to provide training data with high precision groundtruth.
However, a stumbling block is in how one can certify generalization of the learned models and their usefulness in real world data sets.
This opens up fundamental questions such as: What is the role of photorealism of graphics simulations in training CNN models? Are the trained models valid in reality? What are possible ways to reduce the performance bias? In this work, we begin to address these issues systematically in the context of urban semantic understanding with CNNs.
Towards this end, we (a) propose a simple probabilistic urban scene model, (b) develop a parametric rendering tool to synthesize the data with groundtruth, followed by (c) a systematic exploration of the impact of level-of-realism on the generality of the trained CNN model to real world; and domain adaptation concepts to minimize the performance bias.
Paper – The Singularity May Never Be Near
Today I read a paper titled “The Singularity May Never Be Near”
The abstract is:
There is both much optimism and pessimism around artificial intelligence (AI) today.
The optimists are investing millions of dollars, and even in some cases billions of dollars into AI.
The pessimists, on the other hand, predict that AI will end many things: jobs, warfare, and even the human race.
Both the optimists and the pessimists often appeal to the idea of a technological singularity, a point in time where machine intelligence starts to run away, and a new, more intelligent species starts to inhabit the earth.
If the optimists are right, this will be a moment that fundamentally changes our economy and our society.
If the pessimists are right, this will be a moment that also fundamentally changes our economy and our society.
It is therefore very worthwhile spending some time deciding if either of them might be right.
Paper – Optically lightweight tracking of objects around a corner
Today I read a paper titled “Optically lightweight tracking of objects around a corner”
The abstract is:
The observation of objects located in inaccessible regions is a recurring challenge in a wide variety of important applications.
Recent work has shown that indirect diffuse light reflections can be used to reconstruct objects and two-dimensional (2D) patterns around a corner.
However, these prior methods always require some specialized setup involving either ultrafast detectors or narrowband light sources.
Here we show that occluded objects can be tracked in real time using a standard 2D camera and a laser pointer.
Unlike previous methods based on the backprojection approach, we formulate the problem in an analysis-by-synthesis sense.
By repeatedly simulating light transport through the scene, we determine the set of object parameters that most closely fits the measured intensity distribution.
We experimentally demonstrate that this approach is capable of following the translation of unknown objects, and translation and orientation of a known object, in real time.
Paper – Sensor Fusion of Camera, GPS and IMU using Fuzzy Adaptive Multiple Motion Models
Today I read a paper titled “Sensor Fusion of Camera, GPS and IMU using Fuzzy Adaptive Multiple Motion Models”
The abstract is:
A tracking system that will be used for Augmented Reality (AR) applications has two main requirements: accuracy and frame rate.
The first requirement is related to the performance of the pose estimation algorithm and how accurately the tracking system can find the position and orientation of the user in the environment.
Accuracy problems of current tracking devices, considering that they are low-cost devices, cause static errors during this motion estimation process.
The second requirement is related to dynamic errors (the end-to-end system delay), occurring because of the delay in estimating the motion of the user and displaying images based on this estimate.
This paper investigates combining the vision-based estimates with measurements from other sensors, GPS and IMU, in order to improve the tracking accuracy in outdoor environments.
The idea of using Fuzzy Adaptive Multiple Models (FAMM) was investigated using a novel fuzzy rule-based approach to decide on the model that results in improved accuracy and faster convergence for the fusion filter.
Results show that the developed tracking system is more accurate than a conventional GPS-IMU fusion approach due to additional estimates from a camera and fuzzy motion models.
The paper also presents an application in cultural heritage context.
Paper – Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection
Today I read a paper titled “Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection”
The abstract is:
We describe a learning-based approach to hand-eye coordination for robotic grasping from monocular images.
To learn hand-eye coordination for grasping, we trained a large convolutional neural network to predict the probability that task-space motion of the gripper will result in successful grasps, using only monocular camera images and independently of camera calibration or the current robot pose.
This requires the network to observe the spatial relationship between the gripper and objects in the scene, thus learning hand-eye coordination.
We then use this network to servo the gripper in real time to achieve successful grasps.
To train our network, we collected over 800,000 grasp attempts over the course of two months, using between 6 and 14 robotic manipulators at any given time, with differences in camera placement and hardware.
Our experimental evaluation demonstrates that our method achieves effective real-time control, can successfully grasp novel objects, and corrects mistakes by continuous servoing.
Paper – Greedy Deep Dictionary Learning
Today I read a paper titled “Greedy Deep Dictionary Learning”
The abstract is:
In this work we propose a new deep learning tool called deep dictionary learning.
Multi-level dictionaries are learnt in a greedy fashion, one layer at a time.
This requires solving a simple (shallow) dictionary learning problem, the solution to this is well known.
We apply the proposed technique on some benchmark deep learning datasets.
We compare our results with other deep learning tools like stacked autoencoder and deep belief network; and state of the art supervised dictionary learning tools like discriminative KSVD and label consistent KSVD.
Our method yields better results than all.
Paper – Decentralized Optimal Control for Connected and Automated Vehicles at an Intersection
Today I read a paper titled “Decentralized Optimal Control for Connected and Automated Vehicles at an Intersection”
The abstract is:
In earlier work, we addressed the problem of coordinating online an increasing number of connected and automated vehicles (CAVs) crossing two adjacent intersections in an urban area.
The analytical solution, however, did not consider the state and control constraints.
In this paper, we present the complete Hamiltonian analysis including state and control constraints.
In addition, we present conditions that do not allow the rear-end collision avoidance constraint to become active at any time inside the control zone.
The complete analytical solution, when it exists, allows the vehicles to cross the intersection without the use of traffic lights and under the hard constraint of collision avoidance.
The effectiveness of the proposed solution is validated through simulation in a single intersection and it is shown that coordination of CAVs can reduce significantly both fuel consumption and travel time.
Paper – Towards the Holodeck: Fully Immersive Virtual Reality Visualisation of Scientific and Engineering Data
Today I read a paper titled “Towards the Holodeck: Fully Immersive Virtual Reality Visualisation of Scientific and Engineering Data”
The abstract is:
In this paper, we describe the development and operating principles of an immersive virtual reality (VR) visualisation environment that is designed around the use of consumer VR headsets in an existing wide area motion capture suite.
We present two case studies in the application areas of visualisation of scientific and engineering data.
Each of these case studies utilise a different render engine, namely a custom engine for one case and a commercial game engine for the other.
The advantages and appropriateness of each approach are discussed along with suggestions for future work.
Death by a thousand non-life threatening cuts
When editing multi-camera footage in Premiere, if you accidentally cut from one camera track to another and insert a cut you didn’t want, and it is too late to use undo because you have made several more cuts since then (e.g. “camera 1, camera 3, camera 2, camera 1, oops, that was supposed to be camera 1, camera 1, camera 2, camera 1”), you don’t have to set the new clip after the cut back to the camera you want. Instead, you can easily delete the incorrect cut by clicking on the cut in the timeline and hitting the delete key.
So long as you haven’t done a delete/ripple delete of the intervening video, the accidental camera switch/hard cut is completely removed.
Editing on eight synced cameras, my Premiere timeline looks like the forearms of a goth chick at a Bright Eyes concert.
Paper – Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations
Today I read a paper titled “Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations”
The abstract is:
Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering.
Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world.
However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks.
To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image.
When asked “What vehicle is the person riding?”, computers will need to identify the objects in an image as well as the relationships riding(man, carriage) and pulling(horse, carriage) in order to answer correctly that “the person is riding a horse-drawn carriage”.
In this paper, we present the Visual Genome dataset to enable the modeling of such relationships.
We collect dense annotations of objects, attributes, and relationships within each image to learn these models.
Specifically, our dataset contains over 100K images where each image has an average of 21 objects, 18 attributes, and 18 pairwise relationships between objects.
We canonicalize the objects, attributes, relationships, and noun phrases in region descriptions and question-answer pairs to WordNet synsets.
Together, these annotations represent the densest and largest dataset of image descriptions, objects, attributes, relationships, and question answers.
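The carriage example in the abstract is easy to make concrete. Here's a minimal sketch of how such relationship triples could be stored and queried; the structure and function names are my own invention, not the dataset's actual schema.

```python
# Minimal sketch of a Visual Genome-style scene graph.
# Hypothetical structure; the real dataset canonicalizes
# everything to WordNet synsets and is far richer.
from collections import namedtuple

Relationship = namedtuple("Relationship", ["subject", "predicate", "object"])

# Dense annotations for the carriage example from the abstract.
scene = [
    Relationship("man", "riding", "carriage"),
    Relationship("horse", "pulling", "carriage"),
]

def vehicles_ridden_by(scene, person):
    """Return the objects the given person is riding."""
    return [r.object for r in scene
            if r.subject == person and r.predicate == "riding"]

def is_drawn_by(scene, vehicle, animal):
    """True if the vehicle is being pulled by the animal."""
    return Relationship(animal, "pulling", vehicle) in scene

print(vehicles_ridden_by(scene, "man"))         # ['carriage']
print(is_drawn_by(scene, "carriage", "horse"))  # True
```

Answering "What vehicle is the person riding?" then becomes composing the two relationships: the man rides a carriage, and the carriage is horse-drawn.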
Studying – InDesign CC interactive document fundamentals
This month I am studying “InDesign CC interactive document fundamentals”
Paper – Procedural urban environments for FPS games
Today I read a paper titled “Procedural urban environments for FPS games”
The abstract is:
This paper presents a novel approach to procedural generation of urban maps for First Person Shooter (FPS) games.
A multi-agent evolutionary system is employed to place streets, buildings and other items inside the Unity3D game engine, resulting in playable video game levels.
A computational agent is trained using machine learning techniques to capture the intent of the game designer as part of the multi-agent system, and to enable a semi-automated aesthetic selection for the underlying genetic algorithm.
Paper – Wayfinding and cognitive maps for pedestrian models
Today I read a paper titled “Wayfinding and cognitive maps for pedestrian models”
The abstract is:
Routing models in pedestrian dynamics usually assume that agents have complete, global knowledge of the building's structure.
However, this neglects the fact that pedestrians possess little or no information about their position relative to final exits and the possible routes leading to them.
To obtain a more realistic description, we introduce a systematic account of how spatial knowledge is gathered and used.
A new wayfinding model for pedestrian dynamics is proposed.
The model defines, for every pedestrian, an individual knowledge representation that incorporates inaccuracies and uncertainties.
In addition, knowledge-driven search strategies are introduced.
The presented concept is tested on a fictitious example scenario.
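The "individual knowledge representation" idea is fun to toy with. Here is a toy sketch, entirely my own and much simpler than the paper's model: an agent holds only a subset of the true corridor graph, grows it by observing corridors at each node it visits, and heads for the exit as soon as a known path exists.

```python
# Toy sketch (not the paper's model): wayfinding with an
# incomplete "cognitive map" of a building, represented as
# the subset of the corridor graph the agent has observed.
from collections import deque

def bfs_path(graph, start, goal):
    """Shortest path over the *known* graph, or None if unreachable."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def wayfind(true_graph, start, exit_node):
    """Walk the building, growing the cognitive map at each visited node."""
    known = {}                 # the agent's individual knowledge
    pos, route = start, [start]
    while pos != exit_node:
        known[pos] = true_graph[pos]       # observe adjacent corridors
        path = bfs_path(known, pos, exit_node)
        if path:                           # exit reachable via known map
            return route + path[1:]
        # otherwise step to an unexplored neighbour (naive search strategy)
        pos = next(n for n in true_graph[pos] if n not in known)
        route.append(pos)
    return route

building = {"A": ["B"], "B": ["A", "C"], "C": ["B", "exit"], "exit": ["C"]}
print(wayfind(building, "A", "exit"))  # ['A', 'B', 'C', 'exit']
```

The paper's knowledge-driven search strategies are of course richer than "step to any unexplored neighbour", but the skeleton is the same: plan on what you know, explore when you don't know enough.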
Paper – Live-action Virtual Reality Games
Today I read a paper titled “Live-action Virtual Reality Games”
The abstract is:
This paper proposes the concept of “live-action virtual reality games” as a new genre of digital games based on an innovative combination of live-action, mixed-reality, context-awareness, and interaction paradigms that comprise tangible objects, context-aware input devices, and embedded/embodied interactions.
Live-action virtual reality games are “live-action games” because a player physically acts out (using his/her real body and senses) his/her “avatar” (his/her virtual representation) in the game stage, which is the mixed-reality environment where the game happens.
The game stage is a kind of “augmented virtuality”; a mixed-reality where the virtual world is augmented with real-world information.
In live-action virtual reality games, players wear HMD devices and see a virtual world that is constructed using the physical world architecture as the basic geometry and context information.
Physical objects that reside in the physical world are also mapped to virtual elements.
Live-action virtual reality games keep the virtual and real worlds superimposed, requiring players to physically move through the environment and to use different interaction paradigms (such as tangible and embodied interaction) to complete game activities.
This setup enables the players to touch physical architectural elements (such as walls) and other objects, “feeling” the game stage.
Players have free movement and may interact with physical objects placed in the game stage, implicitly and explicitly.
Live-action virtual reality games differ from similar game concepts because they sense and use contextual information to create unpredictable game experiences, giving rise to emergent gameplay.
Studying – Custom textures for retro illustrations
This month I am studying “Custom textures for retro illustrations”
Paper – Micro-interventions in urban transport from pattern discovery on the flow of passengers and on the bus network
Today I read a paper titled “Micro-interventions in urban transport from pattern discovery on the flow of passengers and on the bus network”
The abstract is:
In this paper, we describe a case study in a large metropolis in which, using data collected by digital sensors, we tried to understand the mobility patterns of bus users and how this knowledge can suggest interventions to be applied incrementally to the transportation network in use.
We first estimated an Origin-Destination matrix of bus users from datasets covering ticket validation and the GPS positioning of buses.
We then represent the supply of buses, with their routes through bus stops, as a complex network, which allowed us to understand the bottlenecks of the current scenario and, by applying community discovery techniques, to identify the clusters present in the service supply infrastructure.
Finally, by superimposing the flow of people represented in the Origin-Destination matrix onto the supply network, we illustrate how micro-interventions can be proposed, using the introduction of express routes as an example.
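The Origin-Destination estimation step can be sketched quickly. This is not the paper's method, just the common "trip chaining" heuristic: a rider's destination is approximated by the stop of their next boarding. All field names here are hypothetical.

```python
# Hedged sketch of OD-matrix estimation from ticket validations,
# using the trip-chaining assumption (destination ~ next boarding;
# the last trip of the day wraps back to the first boarding).
from collections import defaultdict

def estimate_od_matrix(validations):
    """validations: list of (user_id, timestamp, boarding_stop) tuples."""
    by_user = defaultdict(list)
    for user, ts, stop in validations:
        by_user[user].append((ts, stop))

    od = defaultdict(int)
    for trips in by_user.values():
        trips.sort()                      # chronological boardings
        stops = [s for _, s in trips]
        # chain consecutive boardings; wrap the last to the first
        for origin, dest in zip(stops, stops[1:] + stops[:1]):
            if origin != dest:
                od[(origin, dest)] += 1
    return dict(od)

validations = [
    ("u1", "07:10", "StopA"), ("u1", "17:45", "StopB"),
    ("u2", "08:02", "StopA"), ("u2", "12:30", "StopC"), ("u2", "18:10", "StopA"),
]
print(estimate_od_matrix(validations))
```

Everything after this (the complex-network model of the bus supply, the community discovery) builds on top of a matrix like this one.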
Paper – Learning to Blend Computer Game Levels
Today I read a paper titled “Learning to Blend Computer Game Levels”
The abstract is:
We present an approach to generate novel computer game levels that blend different game concepts in an unsupervised fashion.
Our primary contribution is an analogical reasoning process to construct blends between level design models learned from gameplay videos.
The models represent probabilistic relationships between elements in the game.
An analogical reasoning process maps features between two models to produce blended models that can then generate new level chunks.
As a proof of concept, we train our system on the classic platformer Super Mario Bros., chosen for its highly regarded and well-understood level design.
We evaluate the extent to which the models represent stylistic level design knowledge and demonstrate the ability of our system to explain levels that were blended by human expert designers.
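To see what "blending two learned models" might even mean mechanically, here is a deliberately tiny sketch of one ingredient, entirely my own: each model is reduced to a map from level element to the probability it appears, an analogical mapping pairs elements across games, and the blend averages the mapped probabilities. The paper's analogical reasoning over probabilistic design models is far richer than this.

```python
# Toy illustration of blending two level-design "models"
# (hypothetical element names and probabilities, not the paper's).
def blend(model_a, model_b, mapping):
    """mapping: element in model_a -> analogous element in model_b."""
    return {
        a: (model_a[a] + model_b[mapping[a]]) / 2
        for a in model_a if a in mapping
    }

mario = {"ground_block": 1.0, "goomba": 0.25}
other = {"platform": 0.5, "spider": 0.75}     # hypothetical second game
blended = blend(mario, other, {"ground_block": "platform", "goomba": "spider"})
print(blended)  # {'ground_block': 0.75, 'goomba': 0.5}
```

The interesting part of the paper is learning the models and the mapping from gameplay videos; the blend itself, once the analogy is in hand, is the easy bit.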
Paper – A Review of Theoretical and Practical Challenges of Trusted Autonomy in Big Data
Today I read a paper titled “A Review of Theoretical and Practical Challenges of Trusted Autonomy in Big Data”
The abstract is:
Despite the advances made in artificial intelligence, software agents, and robotics, there is little we see today that we can truly call a fully autonomous system.
We conjecture that the main inhibitor for advancing autonomy is lack of trust.
Trusted autonomy is the scientific and engineering field that establishes the foundations and groundwork for developing trusted autonomous systems (robotics and software agents) that can be used in our daily lives and integrated with humans seamlessly, naturally, and efficiently.
In this paper, we review this literature to reveal opportunities for researchers and practitioners to work on topics that can create a leap forward in advancing the field of trusted autonomy.
We focus the paper on the 'trust' component as the uniting technology between humans and machines.
Our inquiry into this topic revolves around three sub-topics: (1) reviewing and positioning the trust modelling literature for the purpose of trusted autonomy; (2) reviewing a critical subset of sensor technologies that allow a machine to sense human states; and (3) distilling some critical questions for advancing the field of trusted autonomy.
The inquiry is augmented with conceptual models that we propose along the way, recompiling and reshaping the literature into forms that enable trusted autonomous systems to become a reality.
The paper offers a vision for a Trusted Cyborg Swarm, an extension of our previous Cognitive Cyber Symbiosis concept, whereby humans and machines meld together in a harmonious, seamless, and coordinated manner.
Paper – WalkieLokie: Relative Positioning for Augmented Reality Using a Dummy Acoustic Speaker
Today I read a paper titled “WalkieLokie: Relative Positioning for Augmented Reality Using a Dummy Acoustic Speaker”
The abstract is:
We propose and implement a novel relative positioning system, WalkieLokie, to enable more kinds of Augmented Reality applications, e.g., virtual shopping guide, virtual business card sharing.
WalkieLokie calculates the distance and direction between an inquiring user and the corresponding target.
It requires only a dummy speaker bound to the target, broadcasting inaudible acoustic signals.
Then the user walking around can obtain the position using a smart device.
The key insight is that when a user walks, the distance between the smart device and the speaker changes; and the pattern of displacement (variance of distance) corresponds to the relative position.
We use a second-order phase locked loop to track the displacement and further estimate the position.
To enhance the accuracy and robustness of our strategy, we propose a synchronization mechanism to synthesize all estimation results from different timeslots.
We show that the mean errors of ranging and direction estimation are 0.63m and 2.46 degrees respectively, which is accurate enough even for virtual business card sharing.
Furthermore, in a shopping mall, where the acoustic environment is quite harsh, we still achieve high accuracy when positioning a single dummy speaker, with a mean position error of 1.28m.
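The core ranging trick, as I understand it, is that walking changes the device-to-speaker distance, which shows up as a carrier phase shift. Here is a sketch of that idea only, with assumed numbers (20 kHz carrier, 343 m/s sound speed); the paper's second-order phase-locked loop and synchronization machinery are not reproduced.

```python
# Sketch of acoustic displacement ranging (my reading of the idea,
# not the paper's implementation): unwrap the tracked carrier phase
# and convert the total phase change into metres of motion.
import math

SPEED_OF_SOUND = 343.0          # m/s at room temperature (assumed)
CARRIER_FREQ = 20_000.0         # Hz, near-inaudible tone (assumed)
WAVELENGTH = SPEED_OF_SOUND / CARRIER_FREQ   # ~1.7 cm

def displacement(phase_samples):
    """Unwrap a sequence of tracked phases (radians) into metres moved."""
    total = 0.0
    for prev, cur in zip(phase_samples, phase_samples[1:]):
        delta = cur - prev
        # unwrap jumps larger than half a cycle
        if delta > math.pi:
            delta -= 2 * math.pi
        elif delta < -math.pi:
            delta += 2 * math.pi
        total += delta
    return total / (2 * math.pi) * WAVELENGTH
```

With a ~1.7 cm wavelength, even small movements sweep through many cycles, which is why the displacement pattern carries enough information to recover relative position.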
Paper – Robust Downbeat Tracking Using an Ensemble of Convolutional Networks
Today I read a paper titled “Robust Downbeat Tracking Using an Ensemble of Convolutional Networks”
The abstract is:
In this paper, we present a novel state of the art system for automatic downbeat tracking from music signals.
The audio signal is first segmented into frames synchronized at the tatum level of the music.
We then extract different kinds of features based on harmony, melody, rhythm, and bass content to feed convolutional neural networks adapted to take advantage of each feature's characteristics.
This ensemble of neural networks is combined to obtain one downbeat likelihood per tatum.
The downbeat sequence is finally decoded with a flexible and efficient temporal model which takes advantage of the metrical continuity of a song.
We then evaluate our system on a large base of 9 datasets, compare its performance to 4 other published algorithms, and obtain a significant increase of 16.8 percentage points over the second-best system, for altogether a moderate cost in training and testing.
The influence of each step of the method is studied to show its strengths and shortcomings.
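The abstract only calls the temporal model "flexible and efficient", so here is a much cruder stand-in of my own to show the decoding idea: given one downbeat likelihood per tatum and an assumed fixed bar length, pick the bar phase whose downbeat positions best match the networks' output.

```python
# Crude stand-in for the paper's temporal model: decode downbeats
# from per-tatum likelihoods under a rigid metrical grid.
def decode_downbeats(likelihoods, tatums_per_bar=4):
    """Return tatum indices of downbeats for the best-scoring bar phase."""
    best_phase = max(
        range(tatums_per_bar),
        key=lambda p: sum(likelihoods[p::tatums_per_bar]),
    )
    return list(range(best_phase, len(likelihoods), tatums_per_bar))

# Hypothetical per-tatum likelihoods from the CNN ensemble.
probs = [0.9, 0.1, 0.2, 0.1, 0.8, 0.2, 0.1, 0.1]
print(decode_downbeats(probs))  # [0, 4]
```

The real system instead allows for metrical changes and exploits continuity, which is exactly what a rigid grid like this cannot do.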
Studying – Fundamentals of manga digital illustration
This month I am studying “Fundamentals of manga digital illustration”