Thoughts Ahead of the Q1 2023 FINOS Board Meeting

The following views are personal and based on my experience working in and around open source and the financial services industry the last five years. They do not represent the views of any employer or client of mine, past or present.

The Fintech Open Source Foundation (FINOS) quarterly board meeting is this week.

Below are six suggestions for the board, FINOS community, and wider community of developers, product managers, program leads, etc. working at the intersection of financial services and open source.

They are:

  • Identify strategic themes and double down on a critical few projects
  • Build an open source catalog to demonstrate commercial value
  • Convene a working group around AI
  • Bring product managers to the table
  • Build business cases, benchmarks, and KPIs
  • Bolster the connection between open source programs and software security and supply chain programs

More detail about each of these is below if you’d like to grab a cup of coffee and dig in …

Identify Strategic Themes and Double Down on a Critical Few Projects

FINOS is an industry (i.e., financial services) focused foundation. While there are open source projects that have grown out of financial services companies to have broad cross-industry applicability – pandas, developed by Wes McKinney when he was at AQR and which he continues to maintain today, and Eclipse Collections, originally created by Don Raab, now at BNY Mellon, being two such examples – a principal utility of FINOS is the ability to connect product teams working on common industry-specific use cases to collaborate through open source methods and code.

I recommend that the board identify a targeted set of themes – 3 would be ideal, 5 at most – of shared strategic criticality, each representing a differentiated set of use cases within financial services. This would be a fundamentally more lightweight structure than the old programs model from the FINOS early days, or even the categorization on the landscape. Rather, it would be a statement of a few top-priority areas, for which open source has utility, that resonate at the CIO/CTO level of FINOS members.

Ideally, in each of these areas there should be 1-2 FINOS projects already incubating or active, around which the board and wider community would further rally (more on that in a second). For themes where FINOS does not have a current offering, rather than start something from scratch with an empty repo – an approach that has had mixed success – I’d suggest looking for other projects in the Linux Foundation and wider open source community upon which to build financial services specific extensions and features.

Here are some candidate themes and potential related flagship projects:

  • Big Data, Data Engineering, and AI/ML
    • Legend
  • Next-generation Financial Desktops
    • Perspective
    • FDC3
  • Trading Technologies, Front to Back, and (System-to-system) interop/exchange
    • Financial Objects and security reference data projects
    • ISDA CDM work
    • Morphir
    • MessageML, Symphony WDK, and similar projects
  • Payments
  • Blockchain and Tokenization
    While some applaud the relative lack of blockchain activity in FINOS, there is also a lot of interesting blockchain work happening right now in the industry, in areas such as fixed income issuance and settlement. CEOs of companies like BlackRock are talking publicly about tokenization’s utility. FINOS took a run at a Distributed Ledger Technology (DLT) program in 2018 and it didn’t quite get traction, but the core technology has developed a lot since then, and it now has more in-production use cases, including and especially in financial services.
  • Regulatory and Reporting
    How can open source and open standards help reduce the regulatory burden for financial services organizations – both financial reporting and reporting related to SBOMs, vulnerabilities, licenses, etc. for IT and engineering audits? Enterprises can and should streamline and standardize how they report to regulators and auditors on both financial matters and engineering/SBOM/tech stack risk.

As to the aforementioned “rally”, this would take the form of:

  • The original contributors and/or current maintainers, with support from FINOS and the larger Linux Foundation as needed, recommit resources to what are effectively product marketing roadmaps to further attract and retain consumers of and contributors to these projects. Get the hygiene around these projects in great shape — documentation, public roadmaps, roadshow and conference presentation plans, support and mailing list response procedures, SDKs and reference implementations, etc.
  • In turn, and with an eye towards cross-pollination, each organization in FINOS should “pinky promise” to evaluate at least 3-4 projects in the FINOS catalog, with a goal that every member is consuming at least one project in FINOS that their own organization did not originally contribute. If this proves too hard – if each member can’t find at least one FINOS project that is useful enough for them to implement and use – then a more fundamental question about the FINOS project portfolio, or even the value of FINOS hosting projects on its own at all, should be asked, though I don’t think that’s where things are yet. As stated so well by friend of FINOS Jono Bacon on Monday on LinkedIn, building a community around a software product (or catalog of projects) starts first with shared consumption.

The overall point is that we need to jumpstart the flywheel of cross-contribution via cross-consumption. This was a priority in FINOS’s early days, but harder to do then with comparatively fewer members and projects. Now, under Gab’s and the rest of the team’s leadership, FINOS has a broader set of both members and projects from which to build cross-interest in each other’s projects.

Build an Open Source Catalog to Demonstrate Commercial Value

Financial services professionals working to build great products in banks and other financial services organizations, but with less background in open source, could use some help navigating which open source projects can be used for what, and how. Building on the success of the project expo at OSFF, I think there would be value in a library of case studies and examples of how open source projects – at least those beyond “household names” like Linux and K8S – have been put to use within financial services. Unlike the expo, though, this catalog should not be limited to FINOS or even Linux Foundation projects; it should include any open source project that a financial services firm might use. Extra points if the case studies can include how consumers went on to become contributors.

Another way to implement this might not be as a catalog, which requires an initial critical mass of projects, but as a Yelp-style project review site where open source consumers could easily share their experiences and lessons learned when deploying a particular open source component, perhaps as an overlay to an initial set of project data from sources such as libraries.io.

Open source should be a way for financial services companies to accelerate product delivery roadmaps. A better way to share the pros and cons of open source packages, especially in a financial services context, would help product managers and engineers make informed decisions, and in turn help organizations realize more commercial value from open source.

Convene a Working Group around AI

Everyone has heard about ChatGPT, and if your social feeds are like mine, they are filled with posts about AI.

I think the board should consider starting an AI/ML working group to take on the following topics:

  • Financial services specific considerations (e.g., IP considerations) when using AI-powered developer tools like GitHub Copilot and the ChatGPT-infused repl.it.
  • Open source licensing and its applicability (or lack thereof) to AI/ML. To highlight just one specific issue among several: what counts as “modification” in an ML model? Is setting the weights in a neural network enough to trigger the modification criteria if an ML model’s maintainers have adopted the AGPL as its license (as some have done on Hugging Face)? Using open source licenses for ML models may be a round peg in a square hole.
  • License and copyright issues related to the underlying code bases and data sets on which AI/ML models are trained. Tools like The Stack, built by BigCode on Hugging Face, which lets one search a code corpus for one’s own contributions, are the kind of tooling and transparency we need more of.
  • How tools like ChatGPT can be used to quickly build initial scaffolds and working prototypes of new projects for financial services, and what additional data sets could be used in combination.

Bring Product Managers to the Table

I am inclined to wonder whether participation – i.e., having “a seat at the table” in the distributed product management at the core of open source, which happens in GitHub issues and on working group calls – may be at least as valuable, if not more so, to financial services companies than code contribution by engineers.

I think the community can and should do more to encourage participation by product managers in FINOS and similar efforts, as it is the product managers who likely have the clearest and most comprehensive view of end-user use cases. Product managers are well positioned to provide input on requirements, roadmaps, and features. Along these lines, there are also opportunities to help bridge the gap between internal product management tools and processes and their corollary systems in open source communities (e.g., GitHub Issues coupled with a project’s governance model).

Build Business Cases, Benchmarks, and KPIs

Can more be done by open source practitioners, and the open source community overall, to shore up the business case for open source? (By “open source” I mean everything above and beyond passive open source consumption — i.e., “active” consumption by leveraging tooling like the OpenSSF criticality score, referenced below; participation by product managers and engineers in working groups; code contribution; and financial sponsorship in the form of foundation memberships and project grants.)

Unless corporate leadership can be shown EBITDA- (or FCF-) measurable IRR and ROI models to rationalize investments in open source (as defined above), I think open source may increasingly find itself buffeted by the waves of economic cycles, especially as technologies like AI/ML (which is not mutually exclusive with open source, but may end up treated as a distinct set of programs for corporate planning purposes) become the new hotness attracting attention, sponsorship, and budget dollars.

And so, given there is only so much budget to go around even in the most well capitalized of corporations, organizations pursuing open source strategies could use help with:

  • Business model templates, preferably as IRR models
  • Categorized lists of concrete, tangible business benefits, preferably those that other companies publicly acknowledge having realized. (These cannot be theoretical or aspirational).
  • Guidance on how to set targets and key results goals.

Along the lines of benchmarks, and as I’ve shared in TODO Group and FINOS Open Source Readiness meetings previously, the industry and open source community could benefit from a set of common KPIs, specifically a scorecard with shared definitions with which an organization can benchmark itself to 1) its sector (e.g., capital markets) and competitors/peers, 2) the financial services industry overall (retail banking, capital markets, asset managers, data providers, etc), and 3) open source overall (for which the Open Source Contributor Index is effectively one version). I suggested a few such KPIs in a GitHub issue several months ago. Compiling these benchmarks might be done as part of the Open Source Maturity Model (benchmarks and measures being useful context to maturity rubrics) and/or State of Open Source in Financial Services annual report. I’m hopeful the CHAOSS community could be a huge help here too.

Here’s a starting point of a few KPIs (some of which already exist, at least in part, on LFX Insights) that I think could be the basis for useful benchmarks. Trend lines – month over month, quarter over quarter – coupled with the ability to compare one’s own organization with its sector, industry, and the open source ecosystem overall, are what would make these even more useful to executives.

  • Active Contributor Ratio
    Active Contributors / Total Engineers (or potential contributors overall to include PMs, etc) in a given time period

    The first step to using this metric is a shared definition. One definition is the number of individuals who have proposed a pull request (a bundle of 1 to n commits) in a given period. The Open Source Contributor Index (OSCI) by EPAM, by contrast, uses >10 discrete commits in a given time period.

    (Fictional) Example: A global asset manager has 15,000 engineers and 5,000 PMs, so a total addressable “market” of contributors of 20,000. 1,000 are active contributors to open source. Its active contribution rate is 5%.

  • Count of contributions in a time period made by a given organization, sector, industry
    # of contributions

    Similarly the first step here is to define what counts as contribution. Is it just code contributions? Or other valuable forms of contribution like raising a GitHub issue?

    Useful drill downs might include:
    • Products/projects to which contributions are being made by company, sector, industry
    • Foundations that host the projects to which contributions are being made
    • Technology or domain areas of contributions (E.g., data management, blockchain)
    • Use cases and business domains

  • Count of projects to which an organization, the sector, and the industry are actively contributing
    # of projects

    Potential drill downs:
    • % of projects to which contribution is being made that are in the Linux Foundation, FINOS, etc. Perhaps a pie chart of foundations that house the projects to which contribution is being made?
    • Ratio of the number of projects to which an organization, sector, or industry is actively contributing and that it also uses in production, over the total number of open source projects it uses. This ratio would show how well aligned contribution is to overall open source consumption. It could be further tweaked to include in the denominator just, say, the top 500 most used projects, which would then show alignment of contribution to the projects most used.
      (Total projects to which contribution is made – Projects to which contribution is made but that are not presently consumed in production) / Total open source projects/packages used in production
  • % of pull requests to a FINOS (or any open source) project during a time period
    • that are made by the original contributing organization (e.g., % of pull requests made to Waltz by DB)
    • that are made by FINOS members other than the original contributing organization (e.g., % of pull requests made to Waltz by any FINOS member other than DB)
    • that are made by LF members other than the original contributing organization
    • that are made by contributors from non-financial services organizations
  • Watcher, Star, and Fork counts across:
    • Projects contributed (created and originated):
      • by an org (e.g., Perspective and Quorum for JPMC; GitProxy and Datahub for Citi)
      • by a sector (e.g., asset management)
      • by the industry
    • Projects to which an organization, sector, industry contributes
    • Projects an organization, sector, and industry consumes

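The KPIs above lend themselves to simple shared definitions in code. Here is a minimal Python sketch of two of them – the Active Contributor Ratio and the contribution-to-consumption alignment ratio – using the fictional figures from the examples above. The function names and sample data are illustrative, not a real FINOS dataset or API.

```python
def active_contributor_ratio(active_contributors, potential_contributors):
    """Active contributors / total potential contributors in a period."""
    return active_contributors / potential_contributors

def contribution_alignment_ratio(contributed, consumed):
    """Share of projects used in production that the org also contributes to.

    Equivalent to (total contributed-to projects - contributed-to-but-not-
    consumed projects) / total open source projects used in production.
    """
    return len(contributed & consumed) / len(consumed)

# Fictional global asset manager: 15,000 engineers + 5,000 PMs = 20,000
# potential contributors, of whom 1,000 actively contribute.
print(active_contributor_ratio(1_000, 20_000))  # 0.05, i.e., 5%

# Fictional project sets: contributes to 3 projects, uses 4 in production.
contributed = {"legend", "perspective", "fdc3"}
consumed = {"legend", "perspective", "pandas", "kubernetes"}
print(contribution_alignment_ratio(contributed, consumed))  # 0.5
```

Even toy definitions like these make the point that the hard part is agreeing on the inputs – who counts as a contributor, what counts as a contribution – before any benchmarking can happen.
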
While I doubt individual corporate performance will ever be public – for, say, Morgan Stanley to do a direct comp of any of these contribution metrics with, say, JPMC – I think it’s reasonable to expect executives who fund open source programs and associated foundation memberships to ask, and be able to get coherent answers to, questions that require industry context, like:

  • Are we contributing at a higher or lower rate than the sector, industry, and overall cross-industry average?
  • How does our contribution activity overlap with the projects we most consume? How about for the sector and industry? (see discussion of interventions below)
  • How does our consumption and contribution map to the foundations to which we provide financial support?
  • How much contribution (traction) are we getting on projects we contributed from other FINOS members and wider industry participants? How about from top clients?
  • Where are other industry participants, especially our clients, focusing their contribution activity?
  • and an overarching project discovery question: what open source projects are not on our radar that we should be looking at?

Just having a canonical top 10 or top 100 list of the open source packages most consumed by the financial services industry could be useful.

Bolster Connection Between Open Source Programs and Software Security and Supply Chain Programs

The criticality of open source package vulnerability detection and mitigation continues to grow. Hence the 2021 White House Executive Order and the creation of the OpenSSF.

As was suggested on Twitter Monday, there can and should be more connective tissue between the OpenSSF and the TODO Group, the latter being an incredible consortium of OSPOs in the Linux Foundation, led by Ana Jimenez, through which leading practices are shared about and among open source programs. I’d add FINOS to the mix. Why? Because open source programs, including and especially in financial services, should be well connected to and supportive of software supply chain initiatives usually driven out of some combination of the CISO org, DevX, and CI/CD type groups. Additionally, financial services firms have industry- and regulation-specific requirements related to handling software security and performing incident disclosure.

The most concrete tie-in between open source consumption and usage security (“inbound”) and open source programs, which are often focused on contribution (“outbound”), is project and community health. Project health checks – which can include metrics such as PR review cycle time – are a useful early warning light that an open source project may have a significantly greater chance of containing heretofore unidentified critical (9.0+) vulnerabilities on the NVD scale. Recognizing their value in risk identification, project and community health metrics are being incorporated into the OpenSSF Criticality Score.

In addition to helping software security teams in banks to implement the OpenSSF criticality score among other forms of project health check reporting, open source program professionals are also well positioned to advise on the potential interventions a company might take when a particular project, especially one that’s commonly used or prevalent across transitive dependencies, starts to exceed specified risk thresholds. These interventions might include the following non-mutually exclusive actions:

  • New or increased financial support for an at-risk open source project through
    • Support of the underlying foundation if they are part of one (especially via directed giving if that is an option)
    • Maintainer grants
  • Increased code contribution by the firm’s own teams
  • Hiring independent developers, perhaps via programs such as Major League Hacking, to build new features and fix bugs in an identified at-risk project
  • Increased product direction and feedback by the firm’s own product managers and security professionals
  • Especially if a project is no longer actively maintained, or its maintainers are no longer responsive to PRs, hard fork the project
  • Evaluate alternatives, both open source and proprietary, that might take the place of the at-risk project.

Evaluating the suitability and feasibility of these interventions is work open source programs should be well positioned to help CISO teams with, and an excellent way, in my view, for these programs to demonstrate further value.

Finally, providing all this information in an easily consumable way, with useful visualizations, to the CISO, CTO, and CIO levels of an organization – along with any number of the business case metrics above – is itself a big area of improvement opportunity. For example, CISOs and CTOs should be able to readily call up a list of, say, the top 50 most used open source projects in their consolidated SBOM, with an overlay of 1) OpenSSF health score and 2) current enterprise engagement with and support of these projects (i.e., the interventions above). Better still, imagine bank leadership being able to call up such a dashboard in their preferred open source powered financial desktop (perhaps with FDC3 integration with complementary tools) such that open source security reporting is a “first class citizen” among other executive-level risk reporting, and a complement to existing FINOS-wide public visualizations available in LFX Insights.
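
As a thought experiment, the kind of "top N most used projects" overlay report described above can be sketched in a few lines of Python. All project names, usage counts, scores, and engagement labels below are made up for illustration; in practice the inputs would come from a firm's consolidated SBOM tooling and from OpenSSF criticality-score reporting.

```python
# Hypothetical inputs: usage pulled from a consolidated SBOM, health scores
# from criticality-score tooling, and a record of current interventions.
sbom_usage = {            # project -> number of internal applications using it
    "log4j": 412,
    "openssl": 398,
    "left-pad": 35,
}
health_score = {          # project -> hypothetical 0..1 project health score
    "log4j": 0.92,
    "openssl": 0.95,
    "left-pad": 0.31,
}
engagement = {            # project -> current enterprise interventions, if any
    "openssl": ["foundation support"],
}

def top_projects_report(n=50):
    """Rank projects by internal usage, overlaying health and engagement."""
    ranked = sorted(sbom_usage, key=sbom_usage.get, reverse=True)[:n]
    return [
        {
            "project": p,
            "apps_using": sbom_usage[p],
            "health": health_score.get(p),
            "engagement": engagement.get(p, []),
        }
        for p in ranked
    ]

for row in top_projects_report():
    print(row)
```

The interesting rows in such a report are the mismatches: heavily used projects with low health scores and no current engagement are exactly where the interventions above should be considered.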

Side note: Over the holiday, I started playing with an open source project in the Apache Software Foundation, DevLake, contributed by Merico, an open core company whose investors include OSS Capital, the VC firm that invests exclusively in open source companies. I think the DevLake project could provide some of the metrics and visualization building blocks. There’s also some great new stuff in the latest LFX Insights release. Lots to work from.

In conclusion …

I am really excited about all the great stuff happening in and around FINOS, its incredible members, and the intersection of open source with financial services. I am still buzzing from the fantastic OSFF last month — some of us “old timers” (looking at you, Brad!) were remarking at how much the community has grown. Great stuff! Here’s to an awesome 2023!!

This post was not written by ChatGPT.

ReasonML and React

I’ve always loved programming. My first language was Basic on the Commodore 64. Later I picked up VBScript, VB, Javascript, ABAP, Ruby, and a bit of Java.

At Relay Graduate School of Education, where I was CTO from 2013 to 2015, we used PHP, and specifically the Symfony framework. The conventions of Symfony, coupled with its MVC pattern, eliminate some, though not quite all, of the chaos and noise endemic to PHP.

I learned a lot at Relay. One important lesson was that, especially in smaller organizations like Relay, CTOs really must allocate some of each week – I’d estimate 20% is about right – to hands-on development. CTOs shouldn’t expect to “out code” full-time developers who spend all day coding. But for myriad reasons – not least of which is credibility – CTOs must be able to code in the languages and frameworks of the organizational tech stack.

I came to Relay right from Deloitte, and while my experience delivering large-scale programs at Fortune 500 technology and media companies had taught me a lot, it had been a long time since I had done much hands-on development, and I had never developed anything in PHP beyond “Tinkertoy” practice projects. While reading our code and evaluating data structures was never an issue, writing PHP code was not an area where I could lead by example. I was keenly aware of this deficiency. My team, I’m sure, picked up on my lack of sure-footedness. I regretted this, and I think I was a less effective leader as a result.

So after I left Relay, believing (as I do now) that those who will do best in this economy are those who are at once deep technologists AND deep strategists, I committed to filling what I thought were gaps in my formal computer science education.

Through my ongoing work in CS education I had become aware of the popular CS50 MOOC from Harvard University offered through edX. But rather than take the free online version, I elected to enroll directly through the Harvard University Extension School, which cost me a couple thousand dollars in tuition but also earned me 4 graduate credits in CS and access to the professor and TAs like any other Harvard CS student. After successfully completing CS50 in May 2016, I decided to continue on and take CS61, a deep systems programming class in C and x64 assembly in which I did projects like building my own shell and developing a virtual memory page allocation routine.

After CS61 I still had a taste for something more and decided to take CS51, “Abstraction and Design in Computation”, which probably could be just as aptly titled “Functional Programming in OCaml”. CS51 was a complete flip from CS61. CS61 was deep down in the guts of the computer, dealing with registers and low level I/O. CS51, by contrast, seemed in the ether of math and higher order functions. And OCaml presented a totally foreign syntax, at least at first.

But once I started to tune in, once I opened my mind to a new way of approaching coding, the combination of succinctness and expressiveness blew my socks off. Here, for example, is an idiomatic implementation of Euclid’s GCD algorithm in OCaml:

(* Euclid's algorithm: the gcd of a and b equals the gcd of b and a mod b *)
let rec gcd a = function
  | 0 -> a
  | b -> gcd b (a mod b);;

The essential idea in functional programming is that everything is an expression, and expressions evaluate, via substitution rules in line with the lambda calculus, to values. That made a ton of sense to me. So too did ideas like map, filter, reduce, immutable state, currying, etc. Perhaps most importantly, my exposure to OCaml left me convinced that static typing is the way to go — as Yaron Minsky of Jane Street, the largest institutional user of OCaml (though Facebook is catching on fast), says, a whole class of errors just goes out the window with a type system like OCaml’s.

Back to Relay for a moment – one of the last projects during my tenure was a bake-off between the two most dominant JavaScript frameworks, Angular and React. We ultimately chose Angular, but I liked what I saw in React and kept abreast of it in the time since, developing some projects of my own using React. During that time React’s popularity has grown a ton.

So when, as I was doing my final project in CS51 – a meta-circular OCaml interpreter – I heard about ReasonML, “a new syntax and toolchain” for OCaml that makes OCaml a bit more accessible syntactically to developers coming from languages like JavaScript, I was intrigued. But I really got excited when I learned that some of the same team building ReasonML works on React. Thanks in large part to BuckleScript, an OCaml-to-JavaScript compiler developed at Bloomberg, ReasonML is now a great way to build React applications; among numerous other benefits, ReasonML brings to React the full power of the OCaml type system. And as context, React was originally prototyped in a cousin and antecedent of OCaml, so this – React using OCaml (ReasonML) – is full circle and back to its roots for React.

There are numerous videos and tutorials out there about both ReasonML and using ReasonML with React (and React Native). If you’ve developed apps in React, I suggest you give it a try. There’s a learning curve at first for sure, but soon enough you’ll get the hang of it, and the OCaml type system coupled with capabilities like pattern matching (via “match”, renamed “switch” in ReasonML) will make you a convert.

If you’re already pretty comfortable in FP, especially OCaml, and have some React in your bag, this YouTube video of a recent lecture by Jacob Bass is great. If you need a more gentle introduction, check out this ReasonReact tutorial by Jared Forsyth. Also check out the official ReasonML and ReasonReact documentation of course too. And the support from the ReasonML community on Discord is incredible.

ReasonML is still new and there are still rough spots here and there, especially when using it for production-class systems. But it’s gaining momentum fast and has big companies behind it, along with a whole bunch of developers who know, of course, that JavaScript is the lingua franca of the browser but would like an alternative with static typing for front-end development in particular. ReasonML, which compiles down to JavaScript via BuckleScript, is just that.

Here’s to Reason!

The Fierce Urgency of Now

Earlier today I attended a stakeholder and planning meeting for “The Campus”, the “first technology and wellness hub at a public housing site in the United States.”

Our meeting was at the Howard Houses in Brownsville, Brooklyn. It’s in the Howard Houses that The Campus operates. Brownsville and public housing projects like the Howard Houses have been largely left behind by the surge of investment (and gentrification) in Brooklyn in the last 15-20 years. Brownsville still suffers today from high levels of crime, violence, and poverty.

A key goal of “The Campus” is to provide opportunity to young people, especially young men and women of color who live in public housing. It is hoped that through technology, especially computer science, as well as programs in entrepreneurship and wellness, we can provide youth the hope, confidence, and career skills with which to turn lives and communities around.

Tragically, for one man our work came too late. About twenty minutes before the start of our meeting, Rysheen Ervin, 28, still with a whole life ahead of him, was shot immediately outside our meeting room and only a few feet more from a public school. He died of his wounds. The shooting was witnessed by my friend State Senator Jesse Hamilton, sponsor of The Campus. Senator Hamilton recorded this powerful video immediately after the shooting. This violence had a deep impact on everyone in attendance, including me.

At last week’s CSForAll Summit at The White House, a key theme was broadening participation and making sure the “For All” in CSForAll is not just a platitude. On Thursday Mayor Bill De Blasio will give his one-year update on New York City’s CSForAll initiative. During his speech we can expect to hear much about the city’s efforts to keep the “For All” in the forefront.

To complement and magnify CSForAll and the work of its foundation partner CSNYC, Borough President Eric L. Adams (also a sponsor of The Campus), his staff, myself, and a number of non-profit and private sector partners put together CodeBrooklyn last year. The purpose of the CodeBrooklyn campaign is to champion the expansion of computer science and STEM in our schools, especially in communities like Brownsville, with the goal of establishing computer science in every Brooklyn school in 7 years — 3 years ahead of the city target. We’re still in the early days, but last year we were able to help get over 80% of Brooklyn schools to participate in the Hour of Code.

Senator Hamilton is a key supporter of CodeBrooklyn. Senator Hamilton held one of the first hackathons in Brownsville last year. Another supporter of CodeBrooklyn is City Councilmember Laurie Cumbo, who at a CEC 13 meeting in October 2014 literally jumped onto the stage at PS 307 to join CSNYC board chair Fred Wilson to give impromptu, moving testimony about the civil rights case for computer science.

The fight for civil rights brings to mind Dr. King. The death of this man, Mr. Ervin, literally before the eyes of those gathered to plan for The Campus, gave new relevance to these words of Dr. King:

“We are now faced with the fact, my friends, that tomorrow is today. We are confronted with the fierce urgency of now. In this unfolding conundrum of life and history, there is such a thing as being too late. Procrastination is still the thief of time. Life often leaves us standing bare, naked, and dejected with a lost opportunity. The tide in the affairs of men does not remain at flood — it ebbs. We may cry out desperately for time to pause in her passage, but time is adamant to every plea and rushes on. Over the bleached bones and jumbled residues of numerous civilizations are written the pathetic words, ‘Too late.'”

For the man murdered today, and perhaps his murderer as well, we were “too late.”

Let me be clear – computer science education is not a panacea for all of our nation’s problems. The challenges in communities like Brownsville – or McDowell County in West Virginia – are a Gordian knot that cannot be cut with a couple lines of JavaScript. But our commitment to inclusion and participation in computer science education is a right and important first step in creating new opportunity for communities that the economy has left behind.

And so let us resolve to act, in the memory of this man killed today at the Howard Houses, with Dr. King’s “fierce urgency of now”. Let us never be “too late” again.

Code Syntax Compared

A friend of mine is getting into coding. He was asking me a bit about what language to learn and how they are different. He was curious about functions in particular.

To show him the differences, I decided to write a very simple program to calculate the area of a triangle in 5 different languages. Each program is run at the command prompt. I tried to write the program in more or less the exact same way, somewhat ignoring a couple of conventions in order to make each program as identical to the others as I could.

Python:

#triangle.py
#run at command prompt with python triangle.py

def triangle_area(base, height):
    area = (1.0/2) * base * height
    return area

a1 = triangle_area(10, 2)
print(a1)

Ruby:

#triangle.rb
#run at command prompt with ruby triangle.rb

def triangle_area(base, height)
   area = (1.0/2) * base * height
   return area
end

a1 = triangle_area(10, 2)
print a1

For all the Ruby (and Rails) vs. Python (and Django) debates, these two languages look nearly identical in these examples. That doesn’t hold true forever, though. The main difference is that Python starts the function body (the inside of the “black box”) with a colon. The function ends when the code is no longer indented – white space matters a lot in Python compared with other languages. Ruby, on the other hand, does not use the colon and instead ends the function with “end”.

JavaScript:

// triangle.js
// run at command line with a program such as node – e.g., node triangle.js

function triangleArea(base, height) {
   var area = (1.0/2) * base * height;
   return area;
}

var a1 = triangleArea (10, 2);
console.log(a1);

JavaScript, in part due to its history and orientation to the web, does printing to the prompt a bit differently.

PHP:

<?php

// triangle.php
// run at command prompt with php triangle.php

function triangle_area($base, $height)
   {
      $area = (1.0/2) * $base * $height;
      return $area;
   }

$a1 = triangle_area(10, 2);
print $a1;
print "\n";

?>

Many people think PHP is ugly. I think it’s the dollar signs and question marks. Somehow it feels cheap and uncertain.

Java:

/** Triangle.java
Must be compiled first
Run at command prompt with javac Triangle.java
Then run java Triangle
**/

class Triangle {

public static double triangleArea(double base, double height)
      {
         /** Need 1.0 to get calculation to work right – indicates double **/
         double area = ((1.0/2) * base * height);
         return area;
      }

public static void main(String args[]) {
      double a1 = triangleArea(10.0, 2.0);
      System.out.println(a1 + "\n");
   }

}

This is the only example that needs to be compiled. Compiled languages generally run faster, and programming in languages that need to be compiled is sometimes seen as “harder core”, though that’s a somewhat outdated view. Remember – right tool for the job!

I’ll do this again soon … maybe adding R to the mix.