News
Recent project: PDS Trader
News - Web Development
Written by Tim Black   
Thursday, 07 November 2019 12:15
 

For the last 2 1/2 years, the majority of my web development work has gone into maintaining and building new features for PDS Trader, a platform for finding and planning the best option trades.  PDS Trader is made by Quantum Trading Technologies (QTT).  At the center of PDS Trader is its Portfolio Builder interface, where you can layer option trades ("legs") so that their combined net profit is positive whether the underlying market moves up or down.  The profit and loss of each leg is plotted on a chart (built with Highcharts), and the legs' net profit or loss is displayed as a single line on the chart.  By default, only that net profit line is shown.
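To make the idea concrete, here is a minimal TypeScript sketch (my own illustration, not QTT's code) of summing each leg's profit or loss at expiration into a single net line for Highcharts. The Leg shape, the container id, and the expiration-only pricing are simplifying assumptions.

  import Highcharts from 'highcharts';

  // Hypothetical leg shape; PDS Trader's real data model is more involved.
  interface Leg {
    type: 'call' | 'put';
    side: 'buy' | 'sell';
    strike: number;
    premium: number;   // per-share premium paid (buy) or received (sell)
    quantity: number;  // number of contracts, 100 shares each
  }

  // Profit or loss of one leg at expiration for a given underlying price.
  function legProfit(leg: Leg, price: number): number {
    const intrinsic =
      leg.type === 'call' ? Math.max(price - leg.strike, 0)
                          : Math.max(leg.strike - price, 0);
    const perShare =
      leg.side === 'buy' ? intrinsic - leg.premium : leg.premium - intrinsic;
    return perShare * leg.quantity * 100;
  }

  // Sum the legs at each price point to get the net profit/loss line.
  function netProfitSeries(legs: Leg[], prices: number[]): [number, number][] {
    return prices.map((p): [number, number] =>
      [p, legs.reduce((sum, leg) => sum + legProfit(leg, p), 0)]);
  }

  const legs: Leg[] = [
    { type: 'call', side: 'buy',  strike: 100, premium: 3, quantity: 1 },
    { type: 'call', side: 'sell', strike: 110, premium: 1, quantity: 1 },
  ];
  const prices = Array.from({ length: 61 }, (_, i) => 80 + i); // 80..140

  Highcharts.chart('portfolio-builder-chart', {
    title: { text: 'Net profit / loss at expiration' },
    xAxis: { title: { text: 'Underlying price' } },
    yAxis: { title: { text: 'Profit / loss ($)' } },
    series: [{ type: 'line', name: 'Net P/L', data: netProfitSeries(legs, prices) }],
  });

In PDS Trader itself the individual leg curves exist as well; only the net line is shown by default.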

In the video below, QTT owner Ryan Jones demonstrates the central Portfolio Builder interface on which I worked.

 

 
Last Updated on Thursday, 07 November 2019 12:27
 
Interview questions, and answers!
News - Web Development
Written by Tim Black   
Tuesday, 12 March 2019 22:47

I'm looking for a new job as a full-stack web developer, so I thought it might be useful to me and my potential interviewers to share the answers I wrote recently to some questions I was asked in an interview.  Happy reading!

What is important to you in a career?

I want to use my gifts to serve other people and provide for my family, and do the best job I can at that.

Have you ever worked remotely? If so, what challenges did you face and how did you overcome them?

Yes; nearly all of my 19 years of web development work has been done remotely. One challenge I faced was that a client asked for way more work than I could perform, and when I could not complete all of that work, he refused to pay for the work which I had actually completed! I had to let that client go. I learned a lot from that, but especially to communicate with a client early, often, and specifically about the client's needs and the project's progress. In my current job, we use Slack all day, occasional phone calls and screen sharing, Trello for issue tracking, and (rare!) lunches at a restaurant, and this regular communication helps us get things done. Most of the time I'm free to work without distraction, and we have open lines of communication to get help when we need it. For me, this arrangement has been ideal.

Describe the problem solving methodology you would use if asked to implement a new feature in an existing codebase.

Here's how it works in my current job. Typically I will get specific instructions from the product designer describing the new feature, sometimes with screenshots of how it should look. When something isn't described clearly enough or it doesn't quite make sense, I work with the designer to nail down the specifics so I'm sure I've got it right. If the feature is really large in scope, I document the feature's design first, starting with the intended use case, then the UI components & functionality, then the data model or database schema, then any new modules of code needed, then any general API surfaces that are needed. All documentation is versioned with the code.

Then I move to implementing the code. I start by making a new feature branch using git flow and add the feature branch's name to the Trello card. If I don't already know where the code needs to be modified, I often start from the user's first point of interaction with the app: I inspect the HTML source and trace the execution and the data back through events, variables, HTTP requests, and functions to the database, until I find the place where the new feature can be implemented. I often then use a debugger to step through the execution to be sure of its flow and to know exactly what data is present. In the places where we are able to use unit tests, I write a unit test first. Then I outline in comments the code changes I think need to be made, write the code (including docblocks for methods), run it, check that the unit tests pass and the user interaction works correctly, and debug as necessary.
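As a small illustration of that test-first step (written Jest-style in TypeScript; the feature, file names, and function are hypothetical, not my employer's code):

  // discount.test.ts -- the test is written first, against the intended behavior.
  import { applyBulkDiscount } from './discount';

  test('orders of 10 or more items get a 5% discount', () => {
    expect(applyBulkDiscount({ quantity: 10, subtotal: 200 })).toBeCloseTo(190);
  });

  test('smaller orders are unchanged', () => {
    expect(applyBulkDiscount({ quantity: 3, subtotal: 60 })).toBe(60);
  });

  // discount.ts -- the implementation follows, with a docblock for the method.
  export interface Order { quantity: number; subtotal: number; }

  /** Apply a 5% discount to orders of 10 or more items. */
  export function applyBulkDiscount(order: Order): number {
    return order.quantity >= 10 ? order.subtotal * 0.95 : order.subtotal;
  }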

After implementing the code, I briefly document the changes I made in a git commit message (formatted for conventional-changelog), include the Trello card's URL in the commit message, rebase the feature branch on the develop branch, squash the feature branch if appropriate, and merge it into the develop branch. After testing the new feature and getting approval from the product designer, I merge the changes from the develop branch to the master branch and push them to production. Then I record any useful notes in the related Trello card (noting the relevant git commit hash; I like how GitHub automates that part) and move the card to the "Deployed" list. That way, if we discover later that the problem isn't really solved, we have a paper trail to follow so we can fix it better in the future. In a hobby project I implemented a CI/CD deployment pipeline with TravisCI, which pushed changes to master and production only when their unit tests passed.
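For example, a commit message formatted for conventional-changelog looks roughly like this (the type, scope, description, and card URL here are made up for illustration):

  feat(reports): add CSV export to the monthly summary page

  Adds an "Export CSV" button and a controller method that streams
  the report as a CSV download.

  Trello: https://trello.com/c/<card-id>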

How have the SOLID principles influenced your code?

Most of my code has not used complex class hierarchies, and most of my time has been spent writing code that depends on base classes provided by frameworks and libraries rather than implementing the base classes myself. So I haven't focused much on the domain where SOLID principles matter most. But I have learned aspects of the SOLID principles and practiced them as I've matured as a developer. That learning started when I read about extreme programming on the C2 wiki and elsewhere in the early 2000s. While I care about many best practices (e.g., I aim to write clean, well-organized, maintainable, well-documented and self-documenting code), I remain a pragmatic programmer rather than a purist.

Single responsibility principle: I have followed the single responsibility principle more since I learned to write functions so that they can be more easily unit tested (functions that do only one thing are easier to test), and because "don't repeat yourself" is good advice most of the time.
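A tiny hypothetical example of that habit: keeping two responsibilities in two single-purpose functions so each is trivial to unit test.

  // Each function does exactly one thing, so each can be tested in isolation.
  export function parsePrice(raw: string): number {
    return Number(raw.replace(/[$,]/g, ''));
  }

  export function formatPrice(value: number): string {
    return `$${value.toFixed(2)}`;
  }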

Open/closed principle: I've extended base classes plenty in Backbone, TurboGears, Django, and CodeIgniter, and then overridden the default methods they provide. Ever since learning how Prototype, MooTools, and other early JavaScript libraries modified global prototypes and so made themselves incompatible with each other, I've avoided monkey-patching base classes: who knows what havoc patching them would cause somewhere else in the application!
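In TypeScript terms, the distinction looks roughly like this (a generic sketch, not code from those projects):

  class BaseView {
    render(): string {
      return '<div></div>';
    }
  }

  // Preferred: extend the base class and override its behavior locally.
  class ChartView extends BaseView {
    render(): string {
      return '<div class="chart"></div>';
    }
  }

  // Avoided: monkey-patching the base class (or a global prototype) silently
  // changes behavior for every other part of the application that uses it.
  // BaseView.prototype.render = function () { return '<div class="chart"></div>'; };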

Liskov substitution principle: When I've extended a base class and overridden the default methods it provides, I've generally assumed my custom methods should return the same types the default methods are intended to return, because changing the return value to something other than what other programmers expect is not a nice thing to do! It's best to make code as easy to understand as possible and avoid things that will confuse the next developer who reads it.
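A small hypothetical sketch of that assumption in TypeScript:

  interface User { name: string; }

  class Repository {
    // Callers rely on always getting an array back, even when nothing matches.
    findByName(name: string): User[] {
      return [];
    }
  }

  class UserRepository extends Repository {
    // The override keeps the same return type; returning null or a single object
    // here would break every caller written against the base class.
    findByName(name: string): User[] {
      return [{ name }];
    }
  }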

Interface segregation principle: Earlier in my career I saved old code that was not used, in case I wanted it again. Now I delete it and rely on git to resurrect it if it's needed again. The reason is that unused code is dead weight: it wastes developers' time and attention. I haven't written interfaces myself, but this same concern would motivate me to avoid writing interfaces that require subclasses to implement unused methods, and to use other structures like multiple inheritance, mixins/traits, or libraries to implement only the functionality that is needed.
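As a hypothetical sketch of that preference, small focused interfaces let each class implement only what it actually needs:

  interface Printable { print(): void; }
  interface Exportable { exportCsv(): string; }

  // A full report supports both operations.
  class InvoiceReport implements Printable, Exportable {
    print(): void { console.log('printing invoice report'); }
    exportCsv(): string { return 'id,total\n1,99.00'; }
  }

  // A screen-only widget isn't forced to stub out an exportCsv() it never uses.
  class DashboardWidget implements Printable {
    print(): void { console.log('printing dashboard widget'); }
  }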

Dependency inversion principle: I don't think I've intentionally tried to make concretions depend only on abstractions.

Do you have experience with TDD? When is it helpful? When is it not?

Yes.

Test driven development is helpful throughout the whole software development lifecycle. Writing tests first does a good job of connecting an implementation to its user story, design, and contract. TDD can help keep the code and the coder focused on implementing the features that are actually needed (avoiding feature bloat), and it promotes better code organization, modularization, and separation of concerns, which in turn improves readability and maintainability. Future users can consult unit tests as a form of documentation-by-example to be sure they understand how each method is intended to be used.

TDD gives the developer an immediate feedback loop which helps catch bugs right after they are created, before deploying to production, and this gives developers, management, and clients more confidence that the software does what it's supposed to do. It makes it easier to refactor with confidence that you're not breaking something, and it can save a significant amount of time because a human no longer needs to manually test the same features over and over. TDD is especially helpful for implementing continuous deployment and delivery, which makes it possible to deploy new features to production faster than some competitors in the same market.

Tests can also help isolate the problem when an application unexpectedly fails in production after working fine for some time beforehand. That kind of failure really does happen(!), and it's challenging to solve those problems without tests, logs, and other forms of instrumentation. After a bug is found, tests can provide assurance the bug is fixed. The more mission-critical a piece of code is to the business, the more it's worth testing to be sure it's still working correctly.

But not everything needs to be tested. Tests aren't needed for exploratory code, temporary prototypes, one-off scripts, or 3rd party libraries, and complete test coverage is less necessary for parts of the code that are not central to the business logic. In practice, I care more about writing unit tests for controller methods than for the UI or views. Code that requires lots of stubs and mocks can make testing more trouble than it's worth. In the short term, the time needed to write tests must be balanced against the speed at which new features need to be completed; over the long term, tests should be viewed as a way to reduce the time needed for maintenance (bug-fixing) on the software. The short-term and long-term needs of the business have to be weighed to know how much time should be given to writing tests. Some legacy code cannot be tested easily; newer web application frameworks tend to be set up to be more easily testable.

On the whole, test driven development is more helpful than not.

What new technology have you explored recently and what did you like / dislike about it?

Polymer

I've used versions 0.5, 1, 2, 3 and the most recent pre-release version to make some hobby apps. Web components' encapsulation and composability are awesome; they are what I always wanted out of ExtJS, jQuery, Backbone and other frontend libraries. Polymer's encouragement to just "use the platform" when you can is excellent; it is the future that's becoming the present, because the platform implements the W3C standards. For example, JavaScript now provides tagged template literals, which are an effective native replacement for JSX. The Polymer team has done a great job providing a stable upgrade path between major versions. Polymer never became popular, and its library-specific 3rd party tooling support isn't strong because it's bleeding-edge technology. But those downsides haven't bothered me much, for three reasons: Polymer is mostly W3C standards, which are undeniably popular and have excellent tooling support; frontend frameworks are beginning to use web components under the hood; and the Polymer team provides very well-designed starter kit apps that show how to put together a deployable app using the best current frontend tools.
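As a rough illustration of why tagged template literals can stand in for JSX: the tag is just an ordinary function that receives the literal strings and the interpolated values, so no compile step is needed. (This tag function is my own toy example; libraries like lit-html add escaping and efficient re-rendering on top of the same language feature.)

  // A minimal HTML tag function: interleave the literal strings and values.
  function html(strings: TemplateStringsArray, ...values: unknown[]): string {
    return strings.reduce(
      (out, s, i) => out + s + (i < values.length ? String(values[i]) : ''),
      ''
    );
  }

  const appName = 'Portfolio Builder';
  document.body.innerHTML = html`<h1>Welcome to ${appName}</h1>`;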

Angular

I've used AngularJS (version 1) for the past 2 years at work, and am comfortable with it, but find applications made with it unnecessarily complex. One plus is that Angular's tooling support is quite strong.

In order to get comfortable with the newest versions of Angular, I have made some tutorial applications with Angular 7. I appreciate its use of more recent JavaScript and TypeScript features like modules, which make dependency injection a bit easier, and its use of components for organizing an app's functionality, but the application architecture it requires still seems more complex and proprietary than necessary. React, Vue and Polymer are easier to understand and use, which seems to me to be a very important advantage: it can speed up onboarding new developers, facilitate faster iterations, and better prevent the accumulation of legacy code which is hard to upgrade to newer versions of its dependencies and so ends up breaking.
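For reference, here is roughly what a minimal Angular 7 component with a constructor-injected service looks like (a tutorial-style sketch, not one of my work applications):

  import { Component, Injectable } from '@angular/core';

  @Injectable({ providedIn: 'root' })
  export class GreetingService {
    greet(name: string): string {
      return `Hello, ${name}!`;
    }
  }

  @Component({
    selector: 'app-greeting',
    template: '<p>{{ message }}</p>',
  })
  export class GreetingComponent {
    message: string;

    // TypeScript constructor injection: Angular supplies the service instance.
    constructor(service: GreetingService) {
      this.message = service.greet('Angular 7');
    }
  }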

Docker

That's exactly the problem my current employer needs to solve in the next couple years. I maintain and develop several CodeIgniter 2 applications which require PHP 5.6. In order to run PHP 5.6 on my Ubuntu 18.10 machine, I had previously used the ondrej/php PPA from deb.sury.org which permits installing several versions of PHP in parallel, but because PHP 5.6 is no longer supported (as of Jan. 1, 2019), in order to continue using it I had to hold back (not upgrade) some Ubuntu system packages, and eventually that broke a few other packages. The production servers' system packages will need to be upgraded to remain secure, and to upgrade the system packages, the apps need to be migrated to use PHP 7.x.

So to begin that transition, and to keep my local system from having broken packages, I've begun putting the apps in Docker containers. I like that Docker isolates PHP from my system packages! It will also let us upgrade each legacy PHP 5.6 application to a newer version of PHP individually as we have time, and until then it will let us run the PHP 5.6 apps on servers whose system packages are upgraded and therefore secure, and make deployment even more stable than the automated deployment we already have. So I'm getting a Docker setup working to help my coworkers continue to maintain these apps after my contract ends.
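The core of such a setup can be a very small Dockerfile; this is a generic sketch with an illustrative base image and paths, not our actual configuration:

  # Run a legacy CodeIgniter 2 app on PHP 5.6 without touching the host's packages.
  FROM php:5.6-apache
  # Extensions the app needs are installed inside the container only.
  RUN docker-php-ext-install mysqli
  COPY . /var/www/html/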

This problem my current employer faces has shown me that companies should invest some time in upgrading their stack to newer software, to keep it from becoming unsupported, insecure, broken, or incompatible with newer software, syntax, architectures, and paradigms, and to gain the new efficiencies and business value those can provide. Upgrading too often will slow down new feature development, but not upgrading often enough runs a real risk of the software ceasing to function altogether.

Last Updated on Thursday, 14 March 2019 17:26
 
How to get out of crunch mode
News - Web Development
Written by Tim Black   
Thursday, 04 August 2016 22:11

I did some serious reading and thinking about how I might be able to help a startup stay out of crunch mode, and came up with the following recommendations.

Getting out of crunch mode is a challenging transformation, because it requires changing a company's culture, business plan, and software development process, and these things are hard to change.

It is a matter of changing the company's culture. This has to start with the founders. You have to value a work-life balance.[1] One pointed way I saw this put was: you have to value your employees' bodies more than your profit. Benefits: 40 hour weeks improve employees' happiness and productivity[2], and enable the company to hire from a broader pool, including hiring not ONLY junior employees who are smart, productive, ambitious, and perhaps naive about the "churn and burn" process at some startups, but ALSO more senior employees who are domain experts (think Ph.D.s), stable, reliable professionals, and seek a better work-life balance out of wisdom and not merely necessity. You plan to expand your development team, and a broader candidate pool would make it easier to find candidates willing to relocate to [your non-Silicon Valley location].

It is a matter of changing your business plan. The release deadline pressure comes from selling what you don't have. You can transition to selling the features you have, and selling development services. This can be done gradually, and to a greater or lesser extent. Benefits: This reduces deadline pressure. Selling existing features to new customers also increases and diversifies your customer base, and you can leverage this portion of profit to fund new features. You can still contract to create new features for clients, but rather than guaranteeing specific features as the primary deliverables, guarantee your development services for a period of time as the primary deliverable, delivering new features frequently in an agile manner. This is a paradigm shift which could seem impossible to implement because some clients require feature deadlines, but for some clients faster iteration and constant feedback loops actually give them more needed features faster than they would get under feature deadlines.

It is a matter of changing the software development process and feature delivery architecture you use to implement your business plan. You can use branch-by-abstraction, put unfinished features (or alternative/optional/sales-tiered finished features) behind feature flags, implement continuous integration and continuous delivery[3], and so move to more frequent releases and a rolling release cycle. Continuous delivery and frequent releases are harder with software as complex as operating systems[4], but still possible. Benefits: Faster development velocity, shorter time-to-market for new features, so quicker response to competition, and less pressure on release day, because every day is a release day in that stable release candidates are built every day[3]. Some companies say they could not compete in today's market without using continuous delivery.
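A feature flag can be as simple as a configuration check around the new code path; here is a generic TypeScript sketch (the flag name and functions are made up for illustration):

  // Flags can come from a config file, environment variables, or a flag service.
  const featureFlags: Record<string, boolean> = {
    newReport: process.env.FF_NEW_REPORT === 'true',
  };

  function renderReport(): string {
    // The unfinished feature ships "dark"; flipping the flag releases it
    // without requiring a separate deployment.
    return featureFlags.newReport ? renderNewReport() : renderLegacyReport();
  }

  function renderNewReport(): string { return 'new report'; }
  function renderLegacyReport(): string { return 'legacy report'; }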

Footnotes:

1. This is the shortest and most blunt article I read which could motivate a founder to change his priorities: http://chadfowler.com/2014/01/22/the-crunch-mode-antipattern.html.

2. Surprisingly, Henry Ford found that reducing the work week from six 10-hour days to five 10-hour days actually made his workers produce more per week, and a further reduction to five 8-hour days brought a further increase in production. This was with assembly line workers; arguably knowledge workers' productivity increases similarly when they are not tired. This and much other related research is mentioned in the paper responding to crunch mode at Electronic Arts at http://cs.stanford.edu/people/eroberts/cs181/projects/2004-05/crunchmode/index.html. That paper is the best collection of material I found making the point that it's best to avoid crunch mode. http://www.igda.org/?page=crunchsixlessons is a good presentation, too.

3. How one company transitioned to continuous delivery: https://www.infoq.com/articles/cd-benefits-challenges. "The engineers commented that they don't feel the same level of stress on the release day that they did previously. That day becomes just another normal day."

4. Startup advisor Jocelyn Goldfein wrote that operating systems require a more regular and probably longer release cycle at http://firstround.com/review/the-right-way-to-ship-software/.

Last Updated on Friday, 28 June 2019 17:06
 
How should individual Christians give handouts to the needy?
News - Theology
Written by Tim Black   
Tuesday, 02 February 2016 11:47

A friend shared some experiences and asked me effectively,

"How should individual Christians give handouts to the needy?"

I'm thinking about what to say in response to your questions. Lots of thoughts come to mind. Here are some of them:

1. God freely justifies, so we should freely give the first time someone asks. God also sanctifies, so we should only continue helping if the needy person is willing to obey God's commands, and work to get the help they need. You have to be willing to say things like, "This is a free gift because God forgives those who repent of their sins and believe in Christ, free of charge," and "Friend, until you put in that job application I helped you get and you promised you'd fill out, I'm not going to help you further." Sometimes how they respond to such requirements will show you their true colors.

2. Don't give money to needy people; give food or clothing, or buy their bus ticket. So many spend the money on drugs, alcohol, or cigarettes. If they want gas for a long trip, consider taking them by the police station first to be sure they're not on the run; if they're going on a long trip, they have the time.

3. Ask, "Do you go to a church?" If they say "Yes," ask which one, and send (or offer to take) them there (say "I'll help you, through your church.") If they say "No," ask "Why not?"

4. Instead of handouts, give money to your local church first, then a homeless shelter or some other such organization which is well-equipped to help, and is integrated into that locale. They have the ability to hold people accountable to change their lives, by the strength God provides through the gospel and practical counseling. Tell the needy, "I give through my church (or X homeless shelter). Come there with me and we'll help you out. On the way, let me tell you what God has done for me. Have you ever committed a sin?" Are you going to lead them, or are they going to lead you?

5. Bankrupt people can't keep all their promises, however sincere they are. They can't. They don't have the resources.

6. I'd hardly trust anything or anyone in Las Vegas. I don't have to tell you that, but maybe it bears repeating. I want to believe needy people's stories, and in a sense I do (I take them at their word), but I don't trust anything a needy person says. I trust solid evidence from two or more independent sources.

Also regarding Las Vegas, I don't think it's right to replace 1) gambling with money with 2) buying needy people's ears for the gospel with money. This is a subtle matter of priorities in your own heart, which may not actually change what you do on the outside. We should want to help people with money, and their greater need is for the gospel. Our intent should never be to bait and switch, but to address the person's true needs as a whole. There is a temptation which needy people place before me and other Christians: to substitute diaconal aid for gospel ministry. A needy person's request is an opportunity to share the word, as well as to help that person.

7. Whenever I pray with a needy person, I ask God to forgive their and my sins for Christ's sake, and to help us follow Christ as our Savior and Lord, for our good.

Last Updated on Tuesday, 30 August 2016 12:05
 
Starlight and the Age of the Universe
News - Theology
Written by Tim Black   
Friday, 04 December 2015 17:26

A friend asked me how I reconcile the Bible's apparent teaching that the universe is young with starlight's indication that the universe may be very old, given our understanding that stars and galaxies are millions or billions of light-years away from us. I replied as follows with the resources I have.

You might find the following article interesting in connection with considering the age of the universe.

DeRemer, Frank, Mark Amunrud, and Delmar Dobberpuhl. “Days 1-4.” Journal of Creation 21, no. 3 (2007): 69–76.  Available at http://creation.com/images/pdfs/tj/j21_3/j21_3_69-76.pdf.

Note the following quote from that article:

"God made (not created) the expanse (v. 7a). From what did He make it? The form ‘expanse of the heavens’ may indicate the ‘what’, for it is used four times (vv. 14, 15, 17, 20) even after God called it ‘heavens’. Thus, ‘expanse of the heavens’ suggests ‘the expanded form of the (original) heavens’. That sounds like God started with the original heavens of v. 1—the substance or fabric from which to make finished heavens—and expanded or stretched them out to make places for the luminaries (space).

Thus, ‘the expanse of the heavens’ seems to be the stretched-out form of the original heavens. Confirmations are found in Scriptures written later, if stretching is identified with expanding. Job 9:8, Is. 40:22, Is. 51:13, Jer. 10:12b=51:15b, Zech. 12:1, ‘Who/He (alone) stretches (-ed) out the heavens’. Is. 42:5, He ‘created the heavens and stretched them out’ (created and made). Is 42:12, Is. 48:13, add the anthropomorphism: ‘...with His hands/My right hand...’. Ps. 104:2b, ‘stretching out the heavens like a tent curtain’. Some take such stretching as metaphorical, but equating ‘expanding’ with ‘stretching’ obviates any reason to do so and makes good sense."

My basic thought which might be useful to you is this:  if God stretched out space, He may well have stretched out the starlight within that space at the same time, ending with His fixing the locations of the stars (and so ceasing His work of stretching out the "expanse"?) on day 4.  I don't think this provides a comprehensive answer to your question, but I find it satisfies my curiosity sufficiently, and on biblical grounds.  The article's authors think in a similar way in regard to day 2, before the stars were made:

"God’s separating the matter droplets so far from each other caused their light to dim or go out temporarily, for a second night time. It also stretched out the first light in the universe, resulting in low-frequency background radiation. Hence, this second night was not utterly devoid of light, as was the first, but it was relatively dark as ours are now." (p. 74)

I referenced this article a couple times in my sermons on Genesis 1-3 at http://www.alwaysreformed.com/publicdocs/papers/Sermons%20on%20Genesis,%20by%20Tim%20Black.pdf, notably, on p. 235 in the context of critiquing the Framework view from the perspective of the 24 hour view of the days of creation.

As I stated there, one of the authors expanded on the article above in the following book:

Dobberpuhl, Delmar. The First Four Days:  The Creation of the Universe:  an Annotated Account. WinePress Publishing, 2012. https://books.google.com/books?id=ewwKPs2ROSIC....

Note pp. 157ff, which deal with your question:  https://books.google.com/books?id=ewwKPs2ROSIC....

Other pages also deal with the issue; search for the word "starlight."

An explanation by Dobberpuhl similar to the article above is at http://www.ldolphin.org/cid.html.  Note the following quotes from that article:

"The physical concept just described includes all these smaller blobs forming their own gravity wells then being separated from each other by expanses governed by gravity. There are 13 references in the remainder of the bible confirming that God stretched (Job 9:8, Psa.104:2, Isa.40:22, 42:5, 44:24, 45:12, 51:13, Jer.10:12, 51:15, Zec.12:1) the heavens or spread out (Job 26:7, 37:18, Isa.48:13) the heavens and/or the earth [5]. The stretching implies the expanding of the gravitational fields between the masses (blobs) as they are spread throughout the universe."

"The setting of the luminaries could explicitly refer to the positioning (including relativity and time dilation) of all the heavenly objects in their time and space and limiting their movement with respect to Earth. Most likely it also refers to the stopping of the stretching. Job 37:18 in the NIV translation captures both these concepts in one verse. Other references in the Bible confirm God's act of setting the luminaries in their locations in the sky (e.g. Psa.8:3, 148:6, Pro. 3:19, 8:27, Isa.51:16)."

Russell Humphreys wrote the following book which seeks to directly answer your question:

Humphreys, D.R., Starlight and Time: Solving the Puzzle of Distant Starlight in a Young Universe, Master Books, Colorado Springs, CO, p. 53, 1994.

DeRemer, Dobberpuhl, and Amunrud replied to Russell Humphreys' response to their article at https://creation.com/images/pdfs/tj/j22_1/j22_1_56-58.pdf.  Humphreys advocated the view that time dilation is the explanation for light coming from distant stars in a young universe - that is, "young" from the perspective of earth, because according to the theory of time dilation, time has not moved at the same rate from the perspective of every location in the universe.  So far as I have read, it appears to me that the primary evidence for Humphreys' theory has been given a much simpler explanation, leaving his theory without convincing evidence (see https://en.wikipedia.org/wiki/Russell_Humphreys#New_Cosmology and https://en.wikipedia.org/wiki/Pioneer_anomaly).  Nevertheless, you may find Humphreys' book useful, because it attempts to directly answer your question.

This appears to be an attempt at a serious critique of Humphreys' theory: Conner, Samuel R., and Don N. Page. “Starlight and Time Is the Big Bang.” CEN Tech. J 12, no. 2 (1998): 174–194.  Available at http://www.trueorigin.org/rh_connpage1.pdf.

Personally, because of Scripture's statements that God "spread out" the "expanse" (a word which can denote something that has previously undergone an action of being spread out; the article by DeRemer, et al. led me to see this as significant), I am inclined to think that God may have created each star's light on day 2, while He was also, as an act of extraordinary (not ordinary) providence, greatly expanding the universe. That would be a reason to consider that the stars' light may have traveled "faster" or "further" in proportion to the size of the universe than it does today. If that is not the correct or full explanation, I am inclined to think that God could have created not only each star but also the full extent of each star's light on day 4, and it is possible God caused that light to travel "faster" or "further" on day 4 by an act of His extraordinary providence.

Last Updated on Tuesday, 23 February 2016 21:28
 