I'm going to attempt a small 🧵 to explain ADHD in computer terms. Imagine your brain is a computer; you have working memory (RAM), long term memory (hard drive), processors (frontal cortex) and an operating system.
The operating system has a process manager which takes care of all the things. Some are daemons and run constantly on dedicated chips like breathing and sensory processing. Others need to be managed to interact with the CPU (the frontal cortex) to be active.
Most humans perform a type of preemptive multitasking where processes can be interrupted and scheduled based on I/O waits, CPU needs, and timeslices. You can work on something until you have to go to the bathroom, or you need to eat, or just because it's time to stop.
In ADHD brains the interrupt mechanism is different. It wants to do something, anything really, and when it sees CPU and I/O dipping it assumes a new process needs to be spawned and given priority. It also will randomly terminate processes.
Most computers will defer to the user for which process has priority; the ADHD manager attempts to round-robin task switch across all the processes without regard for what the user wants or needs.
Imagine you're using an email app and suddenly, out of nowhere, Minesweeper opens and moves to the front. You didn't ask for it, but here it is. Then as soon as you finish one level it opens a new browser tab and navigates to the Wikipedia article about the Ottoman Empire.
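If you like, you can sketch that priority-blind scheduler in a few lines of Java. This is only a toy (brains are not queues, and the process names are just the ones from the example above), but it shows what "round-robin with no regard for the user" means:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class RoundRobinBrain {
    // Hand every process a timeslice in turn, with no notion of priority
    // or of what the user actually wants to be doing.
    static List<String> run(List<String> processes, int timeslices) {
        Deque<String> queue = new ArrayDeque<>(processes);
        List<String> executionLog = new ArrayList<>();
        for (int t = 0; t < timeslices; t++) {
            String current = queue.poll(); // preempt whatever was running
            executionLog.add(current);
            queue.add(current);            // straight to the back of the line
        }
        return executionLog;
    }

    public static void main(String[] args) {
        System.out.println(run(
            List.of("write email", "minesweeper", "read about the Ottoman Empire"), 5));
    }
}
```

Email gets a slice, then Minesweeper, then the Ottoman Empire, then back around again, whether you wanted that or not.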
There are also times when the interrupt mechanism will hand all priority to a single process and refuse to send scheduled processes to the CPU. Need to eat? Pee? Sleep? No! Link can't save Zelda on his own, he needs me!
This style of processing isn't broken; it's just a different pattern. There are times when this task switching is beneficial, as not all tasks really need to be given priority. Which is why ADHD folk tend to thrive in chaotic environments that stress out others.
Let me start by saying that I don't hate mocking frameworks. They can be incredibly useful, and from an intellectual standpoint they're some of the most amazing and interesting code you can look at. If you want a master class in reflection and runtime manipulation, go dig around in them.
What I do dislike is how they're misused. I consider myself a classicist, or a member of the Chicago/Detroit school of mocking. That means I don't mock often, and when I do I prefer to use hand-rolled mocks, because I find they're better at letting me test state as opposed to interactions. Still, sometimes mocking frameworks can be damn handy and I'm not above using them. More often they're part of an existing application that I'm working on. The main problem I see with them is when developers don't stay DRY with their mocking. Every time you mock a method, that's a little dab of glue making it harder to change that method. I get sad when I come into a test and see a chunk of mocking repeated over and over in every test:
User user = mock(User.class);
when(session.getUser()).thenReturn(user);
when(user.getName()).thenReturn("Stacy");
You will often find these same lines repeated throughout the system in many different tests, sometimes many times the number of invocations in the production code. If you want to change getUser() or getName() you may find that difficult, not due to the prod code but due to the mocking. I've literally spent days refactoring tests and mocking statements so I could change a single method with an automated refactoring.
We can get around this the same way we stay DRY in our production code: through encapsulation and creating common utilities. In the above example, if this happens in one test we can easily extract a method for it. If it's in many tests we can do the same but put the method in a shared class. I sometimes end up with a set of utilities for common methods. Something like this:
public class SessionExpects {
    public static User currentUser(Session session, String name) {
        User user = mock(User.class);
        when(session.getUser()).thenReturn(user);
        when(user.getName()).thenReturn(name);
        return user;
    }
}
Now there is only one place where these methods are stubbed, and if we need to change them it should be a lot easier. We also keep the existing tests and other usages of the mocking framework intact, so it's really low risk and low change. All we are doing is not repeating ourselves, which seems simple and yet, maybe, not obvious.
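And since I prefer hand-rolled mocks anyway, it's worth noting that the same centralization works without a framework at all. Here's a sketch (the `Session` and `User` interfaces are my assumptions, inferred from the snippets above):

```java
// Assumed shapes of the domain interfaces from the examples above.
interface User { String getName(); }
interface Session { User getUser(); }

// A hand-rolled stand-in for SessionExpects: one place that knows
// how to build "a session with a logged-in user named X".
class FakeSession implements Session {
    private final String name;
    FakeSession(String name) { this.name = name; }
    @Override public User getUser() { return () -> name; }
}

public class FakeSessionDemo {
    public static void main(String[] args) {
        Session session = new FakeSession("Stacy");
        System.out.println(session.getUser().getName()); // prints Stacy
    }
}
```

Either way, the point is the same: one place knows how the stubbing works, so a rename touches one class instead of a hundred tests.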
Yes! Of course it is. Where the author fails is in narrowly defining math and engineering as traditionally male-dominated professions. Historically and across cultures women have dominated domestic crafts. Weaving, knitting, embroidery, tailoring, basket and pottery making are all forms of "thing thinking" that require high-level problem solving, mathematics, and a deep understanding of materials. Yet we do not value them at the same level as men's work. This is partially due to the more ephemeral and replaceable nature of these crafts, but it's also because we do not value women at the same level we value men. So we call the things men do "engineering" and the work women do "crafts". The fact that men's traditional crafts involve stone and metal, and women's involve clay and yarn, is irrelevant to the skills needed or women's interest in them. Just ask Hobby Lobby.
Weaving involves configuring complex repeating patterns of information into a machine that renders that information into new forms. (Sound familiar?)

Yes! Of course it is. When we program we have several different audiences. The compiler or runtime is one. Other developers who must read and understand our program are another. Some of the worst code I've ever worked on was written by coders with poor communication skills. Naming things is hard; composing code into a readable flow is hard. By applying basic composition skills to our code we can help those that come after us.
Larry Wall, the creator of the massively influential Perl programming language, has spoken and written extensively on the connection between human linguistics and programming. Perl itself was informed by and designed around linguistic principles.
Yes! Of course it is. As any programmer who has been around a while will tell you, writing the right code is much harder than writing the code right. Programmers must be part of the process of gathering requirements and empathising with customers. We have to work hard to ask the right questions, give meaningful critiques, and understand the problems that need to be solved.
Maybe? My personal feeling is that part of the problem of attracting women, and men who do not conform to stereotypical "Big Bang Theory" nerdom, lies in the public perception and marketing of the career. We are working against a harmful and counterproductive vision of the coder as a socially awkward genius sitting in a dark room. Most programming isn't writing router software or physics engines. Developing software for humans requires a high degree of general-purpose problem solving, teamwork, and empathy. Many of the best programmers I've worked with did not come into programming through traditional educational paths, and I'm not convinced that grouping programming with math and engineering is beneficial to it from a marketing perspective. Perhaps it should be in business, design, or even on its own.
We are also working against a toxic and misogynistic culture that drives out the women who do want to engage. The most baffling thing about the manifesto is its choice to basically ignore the voices of women who will tell you why they left. It's not a mystery we need brain scans to solve. Just ask.
A much better thing to measure is the forces hampering our teams from delivering. I like to think of this as viscosity. In science, viscosity is a measure of a liquid's resistance to flow. Water flows faster and more easily than honey. Similarly, our teams will go as fast as they are capable of. The real issue is finding what is slowing them down.
I came up with an entirely unscientific method for calculating a team’s viscosity. In the course of a team delivering something to their customer:
Easy huh? Your goal is a value of 1. Obviously not all values of 1 are equal but it should give you a target to work on. You are free to play with the point system. Maybe dependencies outside of your company or area cost more?
Teams that are self-sufficient are going to be faster, or at least be more responsible for their own speed. What would it take to build a team with all of the skills and people needed to deliver?
Of course teams impact other teams. Viscosity is all about how a liquid moves in relation to itself. What's really fun is to take all of the viscosity points for an organization and cross them together: rather than just tallying the points of each team, inherit the points from your dependencies. If your team depends on a team with a viscosity of 5, then you now have 5 as well (plus whatever points of your own).
Map this out and you will start to see the big bottlenecks of your organization. You could create a nice dependency graph and watch as it explodes. Teams that should be really fast suddenly look slow because of a web of other slow teams (usually built to “support” them).
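The inheritance idea is simple enough to sketch in a few lines of Java. This is my own toy formulation of the description above: scores are additive, the dependency graph is assumed to be acyclic, and the team names are made up.

```java
import java.util.List;
import java.util.Map;

public class Viscosity {
    // Effective viscosity: a team's own points plus those of every team
    // it depends on (assumes the dependency graph has no cycles).
    static int effective(String team,
                         Map<String, Integer> ownPoints,
                         Map<String, List<String>> dependsOn) {
        int total = ownPoints.getOrDefault(team, 0);
        for (String dep : dependsOn.getOrDefault(team, List.of())) {
            total += effective(dep, ownPoints, dependsOn);
        }
        return total;
    }

    public static void main(String[] args) {
        Map<String, Integer> own = Map.of("checkout", 1, "platform", 5);
        Map<String, List<String>> deps = Map.of("checkout", List.of("platform"));
        // A "fast" team stuck behind a slow dependency: 1 + 5
        System.out.println(effective("checkout", own, deps)); // prints 6
    }
}
```

A team with a near-perfect score of 1 inherits the full 5 from the platform team it waits on, which is exactly the "fast team that looks slow" effect.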
One of our developers (@briandanenhauer) felt there was something wrong with that time and used one of our hackathons to do something about it. He ran the tests under a profiler and found a significant problem with the way we (and FitNesse) were wiring in fixtures.
FitNesse has an Import fixture which can be used to import the Java packages containing your fixtures. For example:
|Import|
|info.fitnesse.fixturegallery|
|info.fitnesse.anotherPackage|
Our project had attempted to package our fixtures into feature specific packages. Over time this had left us with over 50 fixture sub-packages.
Whenever FitNesse needed to find a fixture it would loop over the list of imports and look in each package. If the fixture wasn't in a package, an exception was thrown and swallowed, and the search continued until the fixture was found. This resulted in literally millions of exceptions being thrown over the course of the main suite.
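The shape of the problem is easy to reproduce. This sketch (class and package names invented; FitNesse's real resolver is more involved) shows a lookup that burns a thrown-and-swallowed `ClassNotFoundException` for every package that misses:

```java
public class FixtureLookup {
    // Stand-ins for the 50-odd fixture packages in the Import table.
    static final String[] PACKAGES = new String[50];
    static {
        for (int i = 0; i < PACKAGES.length; i++) {
            PACKAGES[i] = "fixtures.pkg" + i;
        }
    }

    // The slow shape: probe every package, treating a thrown
    // ClassNotFoundException as "not here, try the next one".
    static boolean foundByException(String fixtureName) {
        for (String pkg : PACKAGES) {
            try {
                Class.forName(pkg + "." + fixtureName);
                return true;
            } catch (ClassNotFoundException miss) {
                // Swallowed and retried; filling in each stack trace
                // is where much of the time goes.
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(foundByException("MyFixture")); // all 50 probes miss
    }
}
```

Multiply 50 probes by every fixture reference in every test page and you get the millions of exceptions the profiler showed.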
@briandanenhauer did the simple thing and flattened all of our fixtures into one package and one corresponding row in the import fixture.
The result was dramatic. A full build of our system went from 13-14 minutes to 6-7! The team was floored and there was much rejoicing! I did a very quick and dirty calculation on the savings in dev time and came up with $200,000 per year for our staff. That's assuming every developer runs verify only once per workday (and we know it's more). That's a powerful argument for hackathons, or just letting your developers have time to make their projects better.
I can remember when I made that tweet, and I was thinking less about whether programming itself was engineering and more about whether programmers were engineers. My father was an architect who designed prisons, schools and shopping malls (ok, all prisons). My father-in-law is a mechanical engineer who worked for the US Army Corps of Engineers on top secret projects like the stealth bomber. So I am loath to call myself an architect or an engineer in their presence.
As my father once said when I told him I was considering taking a job as a software architect: “pssh, you’re not an architect until you prove to the state that your work isn’t going to kill someone”
That's an important point. Architects and engineers (at least structural, mechanical, and civil) are required to go through a rigorous system of education, licensing, and accreditation. They are legally liable for their work and they are keenly aware that they have the public's lives in their hands.
In software development you can take a 3 week Node.js bootcamp from a 22 year old and get a job writing financial systems.
If programming is engineering, how do we get programmers to act like engineers (i.e. professionals)?
There is an almost unlimited demand for programmers to write everything from missile guidance systems to cheap Candy Crush knock-offs, and we seem to have almost no control at all over how these developers are educated. The universities don't teach the art of programming. Most employers don't either. I love the craftsman movement, but so far it only exists in its own little alternate-reality bubble.
It occurred to me while watching Glenn that the attitude I (and many others) have had of deriding programming as engineering serves to feed into the idea that writing crap software is ok. Perhaps if we reorient a little towards calling our practice engineering it would help foster the professionalism many of us long for.
Everyone enjoys trolling JavaScript for its weirdness but everyone has something. Here's Java pic.twitter.com/SszNlbefLP
— Ryan Bergman (@ryber) February 17, 2015
What I got in response (besides the retweets and favs) was a lot of people who felt the need to inform me of why each line was the way it was (and in some cases how stupid I was for not knowing it). I started to experience a phenomenon that is quite common amongst software developers that I like to call “Wonderland Syndrome”.
“But I don’t want to go among mad people," Alice remarked.
"Oh, you can’t help that," said the Cat: "we’re all mad here. I’m mad. You’re mad."
"How do you know I’m mad?" said Alice.
"You must be," said the Cat, "or you wouldn’t have come here.”
― Lewis Carroll, Alice in Wonderland
Apart from Alice and the Cheshire Cat, nobody in Wonderland knew that they were mad. This attitude, simply accepting the rules a system has given you whether they are logical and good or not, is actually a strength in computer programming. A paper from Middlesex University found that successful CS majors were better able to accept and understand the sometimes odd rules of computers.
“To write a computer program you have to come to terms with this, to accept that whatever you might want the program to mean, the machine will blindly follow its meaningless rules and come to some meaningless conclusion”
It's also why we all love puns so much. The problem comes when this acceptance hardens into an orthodoxy about how things should be. None of the examples in the tweet showed this more than the responses to the last item. Some people seemed incensed that I apparently didn't understand that prefixing a number with 0 indicates an octal number, and that this is how it was in C and many other languages. "0 is the standard!"
0 is a horrible thing to use to indicate octal. My 3rd grader can tell you that leading zeros are not significant, and so 022 - 2 = 20. Why must we surprise everyone with something different? Maybe 43 years ago, when C was being created and machines with 12- and 36-bit words made octal the natural notation, it was the only thing to do. I tend to think that even then anything else would have been better.
Fast forward to 1995 and Java had no reason at all to continue with it. I believe they did so simply because of Wonderland Syndrome. "Of course octals start with 0, and hedgehogs make perfect croquet balls."
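You can demonstrate the trap to yourself in a couple of lines of Java:

```java
public class OctalSurprise {
    static int surprise() {
        int x = 022;   // the leading zero makes this octal: 2*8 + 2 == 18
        return x - 2;  // 16, not the 20 a 3rd grader would expect
    }

    public static void main(String[] args) {
        System.out.println(surprise()); // prints 16
    }
}
```

Nothing in the source hints that `022` is a different number than `22`; you just have to already be mad here.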
Yet here we are, running caucus-races to get dry and fixing bugs because of the limitations of a 43 year old computer. THAT my friends, is the WTF.
You see, there are two rules about microservices: 1) they need to be isolated, and 2) they need to be more isolated than that. In fact they need to be Kim Jong-un isolated. When you run a microservice as its own deployable behind its own REST interface, this is easy. You can use whatever libraries, languages, even operating systems you want. However, when deploying a jar inside another application you are suddenly no longer free. The runtime will demand only one version of your org's favorite MVC framework, for example, and everyone had better be on the same page.
So when crafting a jar you need to depend on as little as possible. I personally find freedom from frameworks liberating. Besides, it's a "MICRO" service; you don't need an IoC framework or an ORM at all. In practice I can see many organizations having problems with this. Green developers like gluing frameworks together, and things like Spring make it look easy to just add yet another jar into the component scan. You need to stop! Because that leads to the dark side: dependency, coupling, and weeks spent upgrading 20 jars at once to Spring 4.1.X when just one of them needs to go.
I know Uncle Bob knows all this already. I’m not sure he emphasized it enough or appreciates how many people will attempt to implement his idea in completely wrong ways.
But I’m going to do it anyway.
Several years ago some hipster programmers were frustrated by HTML. Since dogs and babies were qualified to write HTML, they weren't able to let everyone else know how awesome they were on 100% of a project. So they invented HAML to protect their smartypants status.
The main battle cry of HAML seems to be that you can write HTML "faster". As if writing HTML was the bottleneck of programming. If your main bottleneck is writing HTML then either you are the most awesome programmer on the face of the planet and you need to quit writing web sites and start finding a cure for Ebola, or you don't work on anything remotely difficult (lucky you). Thankfully, every text editor on earth autocompletes HTML. Which means HAML is a problem disguised as a solution to a problem that doesn't exist.
(Note that I don't feel the same about Sass or LESS, which at least help to fix some of the major problems with the W3C's biggest failure: CSS.)
Hibernate, Rails, iBatis, whatever. ORMs do two things: 1) save you from writing a bunch of mundane crap early in a project, and 2) guarantee you will spend oodles of time trying to debug a labyrinthine hellhole later in the project. This is the root of what I like to call the "Law of Frameworks", which states:
Any framework designed to keep you from
thinking about a thing will force you to
have to think about that thing in more
difficult ways than if you had not used it.
I've seen team after team, project after project, waste weeks to months fighting ORMs. Stop the madness!
See HAML. Use JSON.
Try to think of the most convoluted and stupid way to make a web request. Then make it dumber than that and require at least eight classes to do it. This is HttpClient. This is why people hate Java.
There is nothing worse than a method suddenly working differently in your production code than it does in tests. This is what AOP does, making it the antithesis of the principle of least astonishment. ORMs often make use of AOP, which should be a sign. As does the last item.
Spring is the most cargo-culted software framework in the world. Almost nobody understands the entire thing. At best people learn a small bit for a little while, then they leave it alone for 6 months and forget. The framework itself is a giant magical mudball that configures itself with annotations and XML (yes, I know about the Java configs, but those are not 100%, nor are they really much better). Almost nothing is obvious or simple. StackOverflow is littered with questions about completely obscure, unhelpful, and downright weird errors. The answers (if there are any) are often something along the lines of "oh, you forgot to implement and register the AbstractSingletonProxyFactoryBean" (which is a convenient proxy factory bean superclass for proxy factory beans that create only singletons, of course). The answer almost never explains what the hell any of this means. Just worship the coke bottle correctly and the bean will show up with candy on the spring equinox!
If ever you choose to upgrade Spring you will find that half the classes you used before have been replaced by other classes that don’t do the same things.
This is the other reason people hate Java.
The day before, he had made a commit and pushed it to the server, but now the content of the commit was gone. Not reverted, mind you, just gone, like it had never happened, even though the commit was clearly still in history.
To be clear, looking at the history of the entire repo showed the commit and its changes as something that happened. But looking at any of the individual files in the commit didn't show the commit at all. WTF?!
This turned out to be the result of a bad merge by another developer. I was able to recreate the scenario, take a look at the weird history:
ryber$ git log --graph --oneline --all
* 1b4cd92 Bad merge by dev B
|\
| * e879eb6 This is the missing commit by dev A
* | 93933b9 commit by dev B
|/
* 6baed99 root commit
Here we can see that e879eb6 is in history. You can see that part of that commit was a change to foo:
ryber$ git whatchanged e879eb6
commit e879eb6007ddef2a955a71651bcf31a25727b510
Author: ryber
Date: Sat Mar 22 16:37:28 2014 -0500
This is the missing commit by dev A
:100644 100644 eb314de... e3525a0... M baz
:100644 100644 ae3cab0... cf561bd... M foo
Yet if we look at the history of foo, e879eb6 is missing!
ryber$ git log --pretty=oneline --abbrev-commit -- foo
6baed99 root commit
What happened here? Where did e879eb6 go in the history of foo? I could understand if the change was reverted, but shouldn't we see some history of that revert? This is where we get into the bad merge.
You may have noticed that the missing commit includes another change, to the "baz" file. It turns out that the second dev also changed baz in 93933b9 and was forced into a merge conflict when he pulled. To someone new to git the merge process might be a bit shocking, because you see all of the changes impacted by the merge. This includes your own changes as well as all of the changes to files in the tree you are merging in that happened after your last common ancestor. Developer B was presented with something like this when he was merging:
ryber$ git status
On branch master
Your branch and 'origin/master' have diverged,
and have 1 and 1 different commit each, respectively.
(use "git pull" to merge the remote branch into yours)
You have unmerged paths.
(fix conflicts and run "git commit")
Changes to be committed:
modified: foo
Unmerged paths:
(use "git add <file>..." to mark resolution)
both modified: baz
You might think, "What the hell? I didn't change foo! Why is foo here?" You might even attempt to get rid of the foo changes... which is exactly what happened. It's actually kind of hard to do from the shell, but fairly easy from some GUI tools like SourceTree. From the shell you just have to issue a checkout of a previous version like this:
ryber$ git checkout HEAD^ -- foo
Then an add, and a commit, and voilà! The file has now been reverted to its previous state as part of a merge, and its individual content history no longer contains the missing commit.
You may be wondering how we got out of this mess. We simply cherry-picked the commits back onto the head. Not very subtle, but it worked.
ryber$ git cherry-pick e879eb6
In the late 80's and early 90's, as I was getting into hacker culture, InSoc was a major influence on me and my friends. They have everything: songs inspired by William Gibson's cyberpunk novels, love songs to Nikola Tesla and IBM, samples from Star Trek, and a sound that gave the impression it was made on an Amiga at 3:00 am after they left a rave. Best of all, they regularly encoded messages and hacker challenges into analog data-sound tracks at the end of their albums. The minute a band has you wiring your CD player into an old handset modem, they've won.
It may be cliche for a programmer to love Kraftwerk, but I don't care. More than just a best-of, "The Mix" is rare in the world of electronic remix albums in that it's mostly better than the originals. I particularly find its version of "Computer Love" to be the best out there.
Don't chalk up Robyn as just another disposable dance-club act. Her electronic music is impeccable and well crafted, with numerous interesting patterns and layers that pay off on repeated plays. The Body Talk series is by far the best. For something even more abstract, check out her work with Röyksopp.
Whenever I listen to Santigold I imagine this is the music the Rastafarian hackers from the space station Zion are listening to in Neuromancer.
The soundtrack was made first, and then the movie was basically a giant music video for a remix of Carmina Burana. Perfect background music for epic coding.
The reason we didn't have any of that is because our computers could only do one thing, and that was run a BASIC interpreter. We didn't have "hard drives", and only the fancy-pants kids had disk drives of any kind. On my block you turned on your computer, you programmed it to do something, and then you turned it off, and when you did, your program went away. I used to program games and then keep my computer running for weeks so I could keep playing them. Eventually I did get a cassette tape recorder that could save data. It saved my games sometimes, and would load the ones it did save... sometimes. I'd say it had about a 30% success rate at doing anything successfully. The rest of the time you just re-wrote the program again.
Back in those days we didn’t have the “internet”, so there was no “Stack Overflow” or “GitHub” where you could find out how to code. You had to subscribe to a magazine like Byte that had programs written into the back of it. You would copy the entire program by hand and then spend several hours trying to figure out where you copied it wrong. Then once you finally had something that worked you could experiment by changing lines. That's how we learned.
If you did want to program anything more complicated than printing your name over and over, you had to plan how many lines you thought a section would take beforehand. You see, we didn't have "classes" or "functions". We just had blocks of code that took up a section of numbered lines. You thought, "I think the code necessary to draw this sprite will take up 10 lines." So you set aside lines 500-599, because you knew you were always wrong by at least a factor of 10. And if you needed more after that, you had to keep the rest of the lines somewhere else, because 600 was probably already taken. You couldn't just lazily hit return 20 times and have everything move for you. It just didn't work that way, and besides, we couldn't waste valuable bytes on luxuries like carriage returns!
You always kept a notebook next to you where you documented where the lines of various subroutines were because the computer sure as hell wasn’t going to help you. All you could do was list the program, or list a range of line numbers. “But why don’t you just pipe it through grep” you say? Because I ain’t some damn college professor that’s why! These are home computers dag-nabbit not some million dollar Unix server from Ma Bell. We didn’t have “grep” or “sed” or “pipes”. We had LIST...and we liked it!
My first computer was a TI 99/4a. It had a 16-bit 3 MHz processor and came with 256 bytes of RAM and 16KB of video RAM. That was more than enough for any of the programs I wrote at home. I made my own knock-offs of games like Centipede, Pong and Berzerk. Just let that sink in for a moment. Centipede, written in BASIC, in under 16KB of RAM. When Bill Gates supposedly said that nobody would ever need more than 640KB of RAM, we all believed him, because that was an INSANE amount of memory. At the time I couldn't even fathom what I would possibly use that much memory for. You can only run one program at a time anyway.
We take for granted how luxurious software development is today. We have endless supplies of ram and disk. It’s easy to forget that not long ago we had almost nothing and yet we did amazing things, and made amazing messes of our code. Some things never change. Happy New Year and GET OFF MY LAWN!
Her main question was: "What do developers really want out of professional opportunities that they might not be getting with their current employers? To put it another way, what are the things that entice you guys?"
That’s an interesting question. What DO we want? Why do some employers have a bad rap in town while others seem to have their pick of the top developers?
First off, I want to dispel one myth. The recruiter followed her question up with "My inclination is that money is a secondary factor when seeking out new grounds for professional growth". Yes and no. Money is not the ONLY factor, but it certainly is a large factor, and it can easily disqualify an employer from a job search, especially for the top talent in town, who may be accustomed to a certain lifestyle. We all live in this same economy. We have families and children we want to send to college. We want to take that trip to New Zealand. We want a nice home. Money is a factor. Your job as an employer is to make sure that it's not a negative when I'm considering you. It's quite insulting and sad when you meet someone who thinks a beer fridge and a foosball table somehow make up for low salaries. They don't.
That said, while pay is a factor in why we might NOT take a job, it's not the reason we take a job, or stay at one. As long as the pay is in a competitive range, the other factors come to the front. So what are those factors?
Self-determination: People want to feel like they are contributing to solutions. They want to bring value to their clients. The absolute last thing they want is to feel like some cog responsible for implementing someone else's design. This is why top-down architecture is so detrimental to good teams. I want some amount of influence over the design, the architecture, and the tools. I want to feel like when I come to work, people want me to be there because they value my contributions, opinions, and the code I write. Most importantly, if something is in my way, or is inefficient, and I have a better way to solve it, I want the power to solve it. Nothing is more frustrating than being told: "Yeah, we know it sucks, but we have to do it that way because [corporate policy, Joffrey the "Architect" said so, we always did it that way, derp]."
Customer feedback: I want to know that the work I’m doing matters. Make sure developers have the opportunity to interact with customers, even if that is part of market research or trade conferences. Ideally I want constant feedback from a product owner who works with the development staff every day and makes sure we are working on the right things.
A cool project with cool technologies: As developers we like cool things and cool toys. I realize sometimes you’re an insurance company and there is little you can do to make your app more exciting. That just means you need to get creative. Sorry but nobody wants to work on a legacy struts app running on Websphere.
XP: I can’t speak for everyone on this but I think I speak for a growing majority. I will not consider a job where XP practices are not followed and embraced. Particularly BDD/TDD and CI. I’m also cool if you’re not there yet and you want me to help you get there. Just don’t be wishy washy about it. XP is the one exception to rule #1. Everyone needs to get on board. We are professionals, act like it and take your craft seriously.
So yes, those test frameworks do suck, but they give you something that unit test frameworks just aren't designed for. Whenever I see "acceptance tests" written in a unit test framework they just look like really poorly written unit tests. So take the extra time and use the right tool for the right job.
I have seen one exception to this. At Agile 2012 I attended a talk from Liz Keogh about writing BDD tests in a domain-specific syntax from within a unit test framework. I thought it was an excellent idea and I have used the style on my own personal projects. I want to emphasize that what Liz has done is not entirely dissimilar to traditional ATDD frameworks. Her data and criteria are absolutely separate from the underlying code, even going so far as to abstract away top-level controllers and domain objects behind fixtures.
Here is an example of the "Keogh style" of testing from the unit test class. Note that those are all static methods of an external fixture class. In many ways it is no different than the text-file side of Cucumber. All of the work of dealing with the underlying classes is behind the fixture, leaving the JUnit methods with nothing but simple asserts.
@Test
public void canCountRegistrations() {
    givenCourse("abc", "Underwater Basketweaving");
    registerUserForCourse("abc", "barry");
    registerUserForCourse("abc", "gary");
    assertEquals(2, getRegistrationCount("abc"));
    registerUserForCourse("abc", "larry");
    registerUserForCourse("abc", "larry");
    assertEquals(3, getRegistrationCount("abc"));
}
You can find more examples in this project.
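For illustration, here is a minimal sketch of what the fixture class behind those static methods might look like. The method names come from the test above, but the in-memory storage is my own assumption; a real fixture would delegate to the application's controllers and domain objects.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical fixture backing the test above. In a real project these
// static methods would drive the actual domain objects; here an in-memory
// map stands in so the shape of the fixture is visible.
public class CourseFixture {
    private static final Map<String, String> courseNames = new HashMap<>();
    private static final Map<String, Set<String>> registrations = new HashMap<>();

    public static void givenCourse(String id, String name) {
        courseNames.put(id, name);
        registrations.put(id, new HashSet<String>());
    }

    public static void registerUserForCourse(String courseId, String user) {
        // Using a Set makes the duplicate "larry" registration in the test a no-op.
        registrations.get(courseId).add(user);
    }

    public static int getRegistrationCount(String courseId) {
        return registrations.get(courseId).size();
    }
}
```

With a static import of the fixture, the test methods read as plain domain language, which is the whole point of the style.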
]]>6 months ago my father passed away from a sudden heart attack. While it was a shock, it was not surprising given his lifestyle choices. I decided that I couldn’t change my genetics, so whatever else happens, I was not going to contribute to an early death the way my dad did. I didn't go into this specifically to lose weight; I did it for a healthy life, and losing weight has been a side effect of that.
I’m a computer programmer. Not exactly the kind of job that encourages a healthy lifestyle. So I came up with a few rules:
I “work out” 30 minutes a day, every day. “Working out” can be defined as pretty much any physical activity that raises the heart rate. So a long afternoon of yard work counts. When I was in the middle of dropping weight, rather than just maintaining like I do now, I was doing 60 minutes a day broken into two 30 minute chunks.
The most important part of this was at night. My old ritual was to put the kids to bed and then have a big glass of wine (or two) with my ass on the couch. That’s a lot of calories to consume and then sleep on. My new ritual was to cut out the nighttime wine (I still have a glass with dinner), and to not eat anything within 3 hours of bed. I started out riding an exercise bike after the kids went down. The bike has some nice variety of 30 minute programs which I could do while watching Dr. Who.
I also joined a gym. I picked a Planet Fitness that’s close to my work. It’s super cheap at only $10 a month. I go there for a quick 30 minute work out before lunch or in the morning. They have this thing called “The Arc” which became my favorite. I also do strength training. You can burn more calories with more muscle mass so don’t skimp on the weights and do nothing but cardio.
Keep in mind I’m not planning on doing a marathon or anything. I’m not pushing my body to its limits on this stuff most of the time. I’m just raising my heart rate to a point and getting a good sweat going. That’s all it has to be. Make it a part of your life, like brushing your teeth.
I have not been following any particular diet. I kind of pick and choose what I want. The main goal has been to simply make healthy choices and do so all the time. If there has been one diet that I’ve followed more than any other it would be the Mediterranean diet. This is pretty easy for me because I love Greek and Middle Eastern food. I also allow myself to eat as many raw fruits and vegetables as I want. You can’t get fat on raw apples...trust me. Here are some of the guidelines:
A note on buying fruits and vegetables: We have four kids and with this diet we go through a lot of produce. If we were shopping at the regular grocery store we would go broke quickly. I highly suggest that if you want to do this to go join someplace like Costco or Sam's Club. You can get a lot of produce for not much money.
Good Luck!
]]>Here is my unscientific experiment. The same screen, with IntelliJ in a dark and then a light color scheme, angled such that a light is reflecting back at the camera:
Now anyone familiar with C# knows that you can split one class over several different files by using the partial keyword. It's really a pretty horrible thing to do; there is a very, very limited scope for it being a good idea. Generated code like web service stubs is often partial so you can add to it without extending it. Other than that, partials are super crappy. They make code hard to read and understand, and they encourage classes to get way too big. In fact, a partial sometimes shows up in code when a class gets so big that people want a quick and dirty way to make it look smaller.
Anyway this project is Java and so can't do partials. At least you would hope not. Yet there I was looking at three classes. We can call them Larry, Moe and Curly. These three were all basically the same class with some different methods. They had the same dependencies, they took and returned the same classes, did similar things, and even had similar names. On top of that they were all held by a big model class that used them interchangeably, calling one and passing its data into the others.
So here I was working in Moe and finding that I needed the functionality of Curly. I was also getting confused about which class did what due to the similarity.
The "fix" of course was to:
Merge them together. This resulted in a pretty huge class which I was uncomfortable with, but at least it resolved the ambiguity.
Simplify the model's use of the code to just let the new big class handle the back and forth with it's own methods. This actually resulted in a lot of methods being removed or made private.
Extract smaller specialty classes that deal with unique things. I'm still doing this step. This is always the hard part but if you look carefully you can find the classes hidden in there. Pay particular attention to feature envy.
I ended up with a single class that is smaller than the three from before and something that’s easier to read and understand. It’s still too big for my taste, but it’s better than what was there before...at least for now.
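The "feature envy" smell mentioned above is worth a quick sketch. All the names here are hypothetical; the point is that a method which mostly reads another class's data usually belongs on that class.

```java
// Hypothetical example of feature envy and its fix. Before the refactoring,
// a Report method reached into Order's price, quantity and discount fields
// directly; after moving it, the calculation lives with the data it envied.
class Order {
    final double price;
    final int quantity;
    final double discount;

    Order(double price, int quantity, double discount) {
        this.price = price;
        this.quantity = quantity;
        this.discount = discount;
    }

    // Moved here from Report: this method only ever touched Order's fields.
    double total() {
        return price * quantity * (1 - discount);
    }
}

class Report {
    // Report no longer envies Order's internals; it just delegates.
    String line(Order order) {
        return "Total: " + order.total();
    }
}
```

Spotting these envious methods is often what reveals the smaller specialty classes hiding inside a big one.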
]]>But what if it had played out differently? In the early running Tcl was a favorite but was ruled out for not being “Javaish” enough. What if it had been picked? I imagine we would be in about the same situation we are today. It would have been dismissed in the early days as underpowered and slow. Then as browsers became faster and tools like “tclQuery” became more robust it would have gained in popularity. Eventually as the Stockholm syndrome set in someone would extract Google’s Tcl engine into a server side platform and hipsters everywhere would form start-ups using “Node.Tcl”.
JavaScript didn’t get where it is today because it’s a great language. It got here because it was the only choice. If browsers had ever had the ability to natively and universally run something else as a first-class language, things might be different today. Will we ever have the opportunity to use anything else? I’m not sure; legacy browsers are a harsh mistress. I do look forward to ECMAScript 6. Here’s hoping it’s not another 10 years before I can safely use it.
]]>Now with Jekyll-Bootstrap your directory structure probably looks something like this:
Hal9000:ryber.github.com ryber$ ls
404.html _includes archive.html changelog.md sitemap.txt
README.md _layouts assets index.md tags.html
Rakefile _plugins atom.xml pages.html
_config.yml _posts categories.html rss.xml
Your site and the Jekyll code are interwoven. You probably have it all on one branch, and when you push you push the entire thing.
This is not the case with the standard Octopress layout. It looks more like this:
Hal9000:ryber.github.com ryber$ ls
CHANGELOG.markdown _config.yml public
Gemfile _deploy sass
config.rb source
README.markdown config.ru
Rakefile plugins
Everything you see here, with the exception of the _deploy folder (which is listed in the .gitignore), will be kept on a source branch in git. The contents of the _deploy folder will be your production branch. Almost all of your work will go into the source folder.
In order to get to the right state we are going to be a little sneaky as we move things around.
You will need at least version 1.9.3 of ruby. If you don't have it yet I suggest installing RVM.
After installing make sure you have 1.9.3 like this:
rvm install 1.9.3
rvm use 1.9.3
rvm rubygems latest
OK, so what we need to do now in order to get everything into the right place is to move the "main" directory into a branch and keep the master in a subdirectory.
So first, make a backup copy of our current state, and then go into the original directory.
cp -r ryber.github.com/ old.ryber.github.com
cd ryber.github.com
Next we need to make a source branch where our Jekyll is going to live. After doing that, delete all of the content from the branch.
git checkout -b source
Switched to a new branch 'source'
git rm -r *
git commit -m "cleaned out everything from branch"
Next we are going to get the octopress content and copy it into our directory without the git history.
cd ..
git clone https://github.com/imathis/octopress.git octopress
cd octopress
git archive master | tar -x -C ../ryber.github.com
cd ../ryber.github.com
git add .
git commit -m "added octopress content"
Now let's make sure we have a working Octopress directory. When you cd'd into the dir, rvm probably asked you if you want to trust the .rvmrc file. Do so. Now set up the app:
gem install bundler
rbenv rehash # If you use rbenv, rehash to be able to run the bundle command
bundle install
rake install
Now we are going to clone your original master branch into the _deploy dir. Yes, this is kind of weird. The outer directory will be on the source branch and the _deploy dir will be on the master branch.
git clone https://github.com/ryber/ryber.github.com.git _deploy
This is the hard part. Copy your content over from the _deploy directory to the source directory. This is not going to be an exact science. Take a look at what you've got and migrate as necessary. Your mileage may vary.
You can probably get 90% of what you need with these two:
cp -r _deploy/_posts source/_posts
cp -r _deploy/assets source/assets
The two config files are a bit different. You can't just copy the bootstrap file over, so open them both and copy over the individual settings that you need.
Run the site and check it out. Make sure everything is what you want it to be. When you do a preview, Jekyll will place your files in the public directory (which is also ignored by git):
rake preview
Before you do this, check the Octopress rake file and change the deploy_default setting to "push". You will find it in a section at the top that looks like this:
## -- Rsync Deploy config -- ##
# Be sure your public key is listed in your server's ~/.ssh/authorized_keys file
ssh_user = "user@domain.com"
ssh_port = "22"
document_root = "~/website.com/"
rsync_delete = true
rsync_args = "" # Any extra arguments to pass to rsync
deploy_default = "push"
Now we are going to replace your old site with the new site (at last!)
We need to clear out all of the files on the master branch to make way for the new content.
cd _deploy
git rm -r *
git commit -m "cleaning house"
cd ..
rake deploy
When you do the deploy it's going to copy the generated site into the _deploy directory (your master branch), commit it, and push it up to GitHub. Last, don't forget to push your source branch to github!
git push origin source
]]>
The first tech company I worked for (a start-up), folded when the founder’s mom pulled our funding. I had a job less than a week later. The second company I worked for was bought by a competitor. Even though I was not let go myself, my inbox and voicemail were flooded the day it was made public. I was able to leisurely browse and consider offers from all over town. Many with friends who wooed me with lunch and booze. Everyone was hiring. My co-workers who were let go all had jobs within a month. Most of the rest of the staff quit for greener pastures over the following six months. This was during the middle of the worst economic collapse in 80 years.
We need more programmers. The competition is fierce. I spend more than a bit of time for work just recruiting and I can tell you that it’s damn hard. Even when you have good salaries, good benefits, cool technology, and the right company culture, finding people is always hard. It’s not unique to Des Moines either. The same story is true all over the country from northern Virginia to Silicon Valley, if you are even a remotely talented programmer you can make very good money at a pretty low stress job where creativity is richly rewarded.
It’s not going to stop. Companies are finding they can’t just offshore their key products. The quality is just not there, because good software requires good communication and that can’t happen when the developers are half a world away. Even if the quality was there, we are horribly short of talented developers in India and elsewhere too. Think of everything you own that has a computer in it. Your phone, your car, your TV. We are painfully short of programmers and the shortage is scheduled to last for the next forever.
If you are a parent and wondering what kinds of jobs you should encourage your children to follow you would do well to expose them to software development. It’s not for everyone; but it’s also not looked at seriously by enough kids.
When I was a kid I would turn on my TI 99/4a and my only option was to program. There was nothing else. Today you have to go out and dig a little more. Still there are some great learning opportunities. Here are a few:
P.S. When I say "kids" I don't mean "boys". We are even more horribly short on girls.
]]>In Greek mythology there is a character named Procrustes. Now old “Crusty” as I like to call him would invite people passing by his place to stay the night. He had a bed for guests and he would get quite upset if they didn’t fit the bed perfectly. So he would make sure they would by either stretching them if they were too short or chopping off their legs if they were too tall. Eventually Theseus stopped by and fitted Crusty to his own bed.
Although it’s not a widely used metaphor today, writers have been using “Procrustean Bed” ever since to describe an arbitrary standard to which reality must be fit. Development sprints are often a Procrustean Bed. We place our stories into them and they tend to expand or contract to meet the time requirement. The very act of saying “this must be done in x time” seems to make the thing take x time. If it’s a simple task the developers tend to buffer it with all kinds of other things (needed or not). If it’s too big to fit then corners get cut.
I much prefer to just use a queue-pull method, concentrating on one thing at a time and letting each thing take the time it needs. I find that the shorter tasks take less time, the longer ones are done right, and the team is more honest with the product owners and themselves about how things are going.
]]>We have 3 TVs. Two are on the main level. The one in the “Front Room” is the primary entertainment TV where we do the majority of our TV and movie watching. There is another in the “Family Room” which is mostly the kids’ (I have 4 girls: 10, 6, 3, and 3). The third TV is in our bedroom on the second floor. All of the TVs are modern flat screens that are digital ready.
### Antenna: I went through several rounds of antennas before getting the right one. None of the smaller indoor ones would work. I ended up getting [this one](http://goo.gl/0sD5Q) from Best Buy. It was on one of those tables of things that had been opened and returned, so I think I got it for $90. I installed it in the attic. Lucky for me, some former owner of my house ran a huge number of phone wires from the basement up to the attic...except they weren’t connected to anything. So I tied the coax to the end in the basement and then pulled it up into the attic. From the basement I ran the coax into an amplification splitter like [this one](http://goo.gl/1opPD). Then I ran coax through the ceiling and up the floor close to air return registers. To get to the second floor bedroom I just reversed the line on the outside of the house that had previously been used by the satellite. I could add another splitter in the attic and send it down the wall and into a cable TV port that was never used, but that’s for a future project.
Overall reception is pretty good. I’m getting all the major networks plus their “extra” channels. The picture seems better than it did with satellite and they don’t go out as much during storms.
One of the main things we used our DirecTV box for was DVRing shows. We realized that most of what we were DVRing were broadcast network shows, and the few other things we were recording (like Breaking Bad and The Walking Dead) were available the day after broadcast on Amazon for a couple of bucks. We still wanted a DVR for broadcast, and we also wanted something that would play purchased shows from Amazon or Apple. Since Apple TV does not yet have DVR abilities, we got a TiVo.
I have not been blown away by the TiVo. Supposedly this version was to be the magic device that brought together TV with all of the different online services. Yet right off the bat I had to buy a separate wireless card, which seemed awkward in 2012. Then most of the online apps (Netflix, Hulu etc.) are noticeably slower and crappier than their Roku counterparts. Still, the DVR part is quite nice and it’s cool that it can download entire shows from Amazon. Boxee was not around here when I made the switch (or at least I was completely unaware of it), so I’d advise anyone to look at it before settling on a TiVo.
For the family room we get Netflix and Hulu through the Nintendo Wii and in our bedroom we have a Roku. I have to say I really like the Roku. It’s super fast and easy and has by far the best experience of the group. If they would just throw a big old hard drive in there I would replace the Tivo in a heartbeat.
For online services we have Netflix, Hulu and Amazon Prime. We don’t really watch Amazon Prime at all. My wife does quite a bit of our shopping from Amazon and we get Prime as part of that, but I think I’ve logged into it once. We do get Amazon season passes to our favorite AMC shows, but that’s a separate service and they will only download to the TiVo. We also don’t use Hulu a great deal. We might get rid of that if we don’t start watching it more. Netflix streaming, on the other hand, is the bee’s knees. We love it and watch it all the time. It’s the secret sauce to the entire thing. The recent addition of Disney content only makes it better.
Overall I can easily say that we really don’t miss satellite/cable at all. My only regret is that we didn’t do it sooner. I highly recommend it. Still, I would like to petition the powers that be for the following:
I’ve had an idea for some time that goes something like this. Let’s say you have this C# method:
public void DoSomething(HttpRequest request){
    request.Params ...
}
Oh Noes! The dreaded HttpRequest class. So full of sealed horribleness; and all we want is the stupid Params. If I want to be able to test this I can either go through the annoyingly complex process of building an HttpRequest object, or I can try to swap out HttpRequest for Microsoft’s wrapper abstraction (or my own). None of that is nice, and in the end I will have a bunch of code I don’t need or want.
Wouldn’t it be great if I could just add an interface to HttpRequest? It would solve most of my problems. It would be mockable and could define just the parts I need. Unfortunately I can’t break into Redmond and add an interface to that class.
But why not? Compilers are fast and smart and can figure out all kinds of things. Let’s say I made this interface for my method:
public interface IHttpRequest { NameValueCollection Params { get; } }
I don’t see any reason the compiler (and IDE’s) could not look at the requested interface, look at HttpRequest and say “yep, that works.” It would STILL be type safe. It would STILL happen at compile time. It would NOT require anything to happen at runtime, and it would NOT be the same as duck typing because the object could not be just anything that (might) fulfill the request at runtime. The compiler would simply make a shorthand reference the first time it sees that HttpRequest implements IHttpRequest in the context of the package/assembly.
Maybe for speed there would have to be some kind of keyword on the interface or the param? Maybe not. Hey Anders or whoever is in charge at Oracle…give me a call, we can work it out.
Can anyone verify if there are static languages that do this? I have a hunch that Scala’s “traits” are somewhat like this but I’m not sure.
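To make the pain concrete, here is the wrapper-abstraction workaround described above, sketched in Java terms (all names are hypothetical stand-ins for the sealed framework class):

```java
import java.util.Map;

// The narrow interface declaring only what DoSomething actually needs.
interface RequestParams {
    Map<String, String> getParams();
}

// Stand-in for the framework class we cannot modify or make implement
// our interface (the HttpRequest of the post).
class FrameworkRequest {
    private final Map<String, String> params;

    FrameworkRequest(Map<String, String> params) {
        this.params = params;
    }

    Map<String, String> getParams() {
        return params;
    }
}

// The boilerplate the post complains about: an adapter that exists only
// because the compiler will not match the shapes for us. With the
// proposed compile-time structural check, this class would vanish.
class FrameworkRequestAdapter implements RequestParams {
    private final FrameworkRequest inner;

    FrameworkRequestAdapter(FrameworkRequest inner) {
        this.inner = inner;
    }

    @Override
    public Map<String, String> getParams() {
        return inner.getParams();
    }
}
```

For what it's worth, Go's interfaces are satisfied structurally at compile time in roughly this way, while Scala's structural types achieve it via runtime reflection.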
]]>The programming world has been preoccupied with CS vs programmers the last week. I wanted to weigh in on an important point that I don’t think has been made.
Almost anyone can write working software.
Some people can write very efficient software (you can easily make the case that CS helps with this.)
In the world of business programming the most desirable trait for code is that it reads well and that other HUMANS can understand it and work with it. A very elegant program that can solve abstract problems doesn’t mean anything if other programmers can’t grok how to use it. What’s more, code that is easy to understand and read is often also efficient and working.
If I were to create a new major for programmers I think I would put it in the business school. Not with engineering or mathematics. It would center on how to communicate (with humans) through code. How to work with a business to determine requirements. How to make money. It would have required courses in TDD, BDD, CI, agile processes, graphic design, speech communications, and yes, a lot of CS. Most importantly it would have lots of labs where students must make working programs together.
I have been hiring programmers for over 10 years. My impression of recent CS grads is that they have only 1/3rd of the skills I really want. I do think a CS degree is a great start to a career in IT but we really need the universities to give us something a little different.
]]>What came out of the retro was a great idea: Pair Programming Bingo. It works like this:
Each team has a “Bingo Board” listing all team members along the top and the sides. You get to mark a square once you have paired with another team member for at least a morning or afternoon session. We also have a column for “outside”, meaning any member of a different team. Members who get “bingo” by having a complete line get a prize. Teams that get a blockout get even bigger prizes. One idea is to line the bingo boards up and make it into a kind of competition.
And yes, we realize you would only need half of the chart, but we decided to keep both sides to represent who was the driver and who was the navigator. Anything to encourage more pairing!
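As a sketch, the board is just a grid of pairing marks, with a full row or column counting as bingo and a full grid as a blockout. The class and method names here are my own invention, and the "outside" column is left out for simplicity:

```java
// Hypothetical model of the pairing bingo board described above.
// Rows are drivers, columns are navigators, which is why both halves
// of the (otherwise symmetric) chart are kept.
class PairingBingoBoard {
    private final boolean[][] paired;
    private final int size;

    PairingBingoBoard(int teamSize) {
        this.size = teamSize;
        this.paired = new boolean[teamSize][teamSize];
    }

    void markPairing(int driver, int navigator) {
        paired[driver][navigator] = true;
    }

    // Bingo: someone has driven with everyone, or navigated for everyone.
    boolean hasBingo() {
        for (int i = 0; i < size; i++) {
            boolean row = true, column = true;
            for (int j = 0; j < size; j++) {
                row &= paired[i][j];
                column &= paired[j][i];
            }
            if (row || column) {
                return true;
            }
        }
        return false;
    }

    // Blockout: every square on the board is marked.
    boolean hasBlockout() {
        for (boolean[] row : paired) {
            for (boolean cell : row) {
                if (!cell) {
                    return false;
                }
            }
        }
        return true;
    }
}
```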
More to come as it evolves
]]>Dead-Parrot Dead: This is the easy stuff. The class or method that is never invoked. The library that’s imported but never used. This kind of code is easy to remove; it’s very low risk. Don’t listen to the people who tell you that the code is just resting or stunned. It’s kicked the bucket, shuffled off its mortal coil, run down the curtain and joined the bleedin’ choir invisible!! THIS IS EX-CODE!! Clean up the body. Kill Satisfaction: 1 Zombie Head
Ghost Code: Ghost code is actually the most common. You probably have it all over your code base and you don’t even know it. I’ve known developers who have spent their entire careers on projects writing code nobody asked for. Unless you can tie code to a specific business case and it’s bringing value right now (NOT “maybe someday”), all it’s doing is getting in the way and sucking away your time. Exorcise it now and put it out of its misery. Kill Satisfaction: 3 Zombie Heads
Zombie Code: A more subtle form of dead code is zombie code. Code that looks alive but actually wants to eat your brain. This is code that is unreachable for various reasons. Perhaps it’s tied to a particular entry in a config file that never has a different value. It can also be spotted by a telltale magic bool being passed to a method which is only ever called with “true” (or only ever with “false”). At worst the code is strung throughout complex classes and methods that are only used in one particular way with limited expectations. If a developer ever tells you his code is “flexible” be wary, it might be a zombie.
These kinds of scenarios can be a little harder to dig out, but often have a single kill point. Once you shoot it in the head it leads to an avalanche of deleted code. Kill Satisfaction: 6 heads.
Vampire Frameworks: Frameworks are pretty, they solve all of your problems, and their perfect 19-year-old bodies sparkle while they seduce you with their smoldering eyes. Don’t be fooled though! Any framework that forces you to generate boilerplate after boilerplate that you don’t find useful (or understand) is pure eeeevil. Even worse are the ones that generate these boilerplate classes themselves and inject their unholy poison all over your app. They suck away your flexibility, your ability to test, and your ability to be lightweight. They often are quite good at doing something the way they think you should do it, but as soon as you need to do something different (about day 3 in) they make your life a living hell.
Once established, killing off a framework can be quite hard. You need to stop them as early as possible. Kill Satisfaction: 10 heads.
]]>Like nature, code in a large project with many developers undergoes Darwinian pressures of natural selection. If you write truly great and clean code, if the purpose is obvious, and if there are simple, easy-to-understand examples of its use in both production code and tests, then your code will grow and get used. Other developers will start to use it as a pattern; they will use your classes in unexpected and surprising ways counter to your original design. Code that is ugly, hard to understand and use, or that does not provide benefit over other code (even new code) will not get used and eventually will be killed off. If you are an “architect” or “tech lead” the most damage you can do to a project is to interfere with natural selection and force other developers to do things a certain way. Particularly when the classes you wrote suck.
Forcing people to use your magical “flexible” framework will only prolong hardships in your app. Despite your best attempts, new mini-frameworks will crop up like weeds as developers either try to get around your bad code or simply don’t understand that it “already does that”. The fact is if your code had been good to begin with people would have happily extended and used it.
So don’t worry when nobody is using the divine classes you spent so much time on. Figure out why, make improvements, compete. Developers are like water and will always follow the path of least resistance. Make your code that path. Make it the yellow freakin’ brick road. Encourage your own bad code to die, kill off others without worrying about upsetting them. It’s for the greater good after all. Most of all don’t fall in love with your code. It’s not long for this world.
]]>var
Yes, var, it seems like such a little thing, such a minor feature, but it makes refactoring so much easier. Take this statement:
var foo = someObj.GetFoo();
Note how nowhere in this statement does it explicitly say what foo is. It’s still statically typed because the compiler can infer the type from GetFoo’s return. Some people might think that’s a problem, but we have modern IDEs so it’s really no big deal.
The power comes when I want to refactor GetFoo: now as long as whatever it returns has the same signature as the original, everything is OK and I never have to touch this file. I might be introducing an interface, or an abstract class, or even completely replacing it with some other implementation. It matters not; all that matters is that my change had the smallest impact possible.
In Java 7 they are introducing some generics stuff where you don’t have to state the type twice. So instead of
Map<String, String> foo = new HashMap<String, String>();
you can do
Map<String, String> foo = new HashMap<>();
This completely misses the point of type inference. All it does is save me some keystrokes, but it does little to assist future refactorings. The fact that Sun/Oracle spent time on this rather than proper inference features is mind boggling and almost insulting.
P.S. Someone has made a library to attempt this. I can’t speak for how well it works or its impact as I have not yet used it. I suspect that for type inference to really work well it needs to be baked into the compiler.
]]>What I’ve found is a party that all the cool kids left hours ago. I don’t know if it’s the oracle takeover or if it started before that but the whole scene just feels sad and lonely. The recent announcement of the features in Java 7 adds to it.
Java 7 can easily be summed up as the programmer’s version of Ralphie’s present from his aunt in “A Christmas Story”. We wanted lambdas and all we got was strings in switch statements. Seriously, they should have just snuck the strings-in-switches thing in without pointing it out, because everyone is just making fun of the fact that it took until version 7 to get it.
Oh well, I guess the Scala party down the road is where everyone went. I hear they have a keg…and closures.
]]>Eleven years is a lifetime in the tech world to be somewhere. I often tell people that in reality Geo was at least three different companies over my tenure. There was the early cowboy hacker startup phase; the professional services “we’ll customize anything for anyone” middle phase; and finally the SaaS app agile/TDD rock star halcyon days that ended with our eventual acquisition. It really was the kind of place where it was what you made of it. You could learn a lot, work on interesting projects, improve the product on your own initiative, and interact with some of the best peers in town. That’s not to say it was all wine and roses, but overall people with the right attitude and a little patience could go quite far.
The key was, the company was never satisfied with itself. It was constantly experimenting and changing and had great courage to make leaps other companies would never have considered. Sometimes we failed epically, but failure was OK as long as you learned. In the end, that’s one of the best things you can find in a company.
I really owe my career to the people I worked with at Geo. I don’t know where my career would have taken me elsewhere but I’m sure it would not have been as good. So a big thanks to Frank and the executive team for creating a company environment where IT was allowed to be IT; and a huge thank you to all of my fellow developers, you guys are truly rock stars.
As to why I am leaving. Let’s just say that the new company is not GeoLearning.
]]>How it got this way could be a good master’s thesis on the dangers of waterfall and cramming every possible requirement into a bloated spec at the beginning, but regardless of that, the team had a problem.
Extra and unnecessary code made building slow, made performance slow, and made testing slow and very difficult. It was confusing for developers to have to deal with, and it wasted all kinds of time with rabbit holes and marathon sessions in the debugger. Worst of all, there were few tests to document the behavior.
Finally after a particularly difficult weekend the team had had enough. We made some time, got out the machete and started to hack away at the dead flesh. The result was a faster, less confusing, less buggy system that performed all of the same duties as the original app. Dev velocity went up as build times and time in the debugger went down. Occasionally there would be second guessing, “What if we need that some day?”, “Well, that way IS more flexible”, but you know what? That’s what version control is for. To this day I have yet to go hunting back in time looking for some of that dead code.
The other result was legend…”The Big Book of Dead Code”. The more and more code I hacked away the angrier I got. I watched as developers I knew from the R&D team disappeared like they had never existed. It was never their fault that they were asked to write something the product never needed to begin with.
We needed to be able to show to management how much waste a gigantic 2-year waterfall project produced. So I took the diffs and wrote a little script (called “Bring Out Yer Dead”) that took the deleted code, removed all of the spaces, tabs and line breaks, formatted everything into a single block of raw text, and then printed it all out in a 9pt font, front and back. As the code was removed, the book grew.
It ended up being easily over 500 pages. 500 pages of blood, sweat, yak hair and money. The book became famous, people would come from far and wide to gaze in wonder. I never saw the look on the owners faces when they were shown it, but I was told that it was very sobering. It ended up becoming a symbol of development black holes. Never again would management tolerate non-incremental development and gold plated specs. We would deliver small bits quickly, we would adjust requirements as needed. We would do only what needed to be done. Keep it simple! Yagni! “Bring out Yer Dead!”
]]>He challenged me to “please see if you can get a smallish project, maybe 1K unit tests (ignore other tests) to build and run in <45s & blog it!!”
Well, lately I’ve been working on setting up an integration test suite for Ninject, so I was familiar with it as not just a small, fast C# library, but one that is quite popular.
So here is the result. For just the build/test of the core project, total time from the command line is between 4 and 5 seconds. About 2 seconds of that is running the 223 unit and integration tests.
For a total CI package/build, including generating packages for .NET 2.0, 3.5, 4.0, 3.5 Compact, and Silverlight 2, 3 and 4, which adds up to 669 tests, it takes around 1:45 to 2 minutes.
So I’d say that shows that .NET is at least capable of having fast cycle times. The thing that’s still missing is good autotest tools. IntelliJ and Eclipse smoke VS in terms of testing and refactoring. ReSharper…as good as it is…does nothing for the rest of the bloat in Studio.
Now, if only we could get JetBrains to make a ReSharper for MonoDevelop we would be all set.
]]>