What is Hogan COBOL?

And it is about tracking real money, so if the system isn't exactly right, they could end up tied up in court, which makes it even tougher.

The whole article clearly shows that it is justified. The risk is too high to leave things as they are today. I work mostly for enterprise-type customers, have worked on government projects, etc.

So I'm aware of the work involved here. For some projects, Oracle bid our customers 10x more than we did at the time. But this is beside the point, anyway. Stay on mainframes. That's the basic move, and it's the big one. This is like finding an old C codebase: in the process of working with it, you document it and build up testing.

You end up with better COBOL, or a transpiler with a proper toolchain. You start with something terrible, with a huge installed base, yet you start moving towards something else. In the process, you can do something like what has been done with ASM. The main trouble is that customers on the enterprise side of the world have no respect for developers, and fire them after years, or just don't care.

Then, when in trouble, they ask for external contractors who do not have the required knowledge and maybe not even the skills. Do it well, and this is fairly cheap for the benefit. But it requires you to "do it well", and that is the "hard" part, because the human factor is the big issue here.

Talk is cheap and rarely includes a full accounting of real-world constraints. Likewise, you can tell what a society's real priorities are by what it does.

Of course. But after decades of the same problem, isn't it time to admit that it is time to change? Everybody always complains about the weather, but nobody ever does anything about it. That doesn't stop people from complaining or offering conceptual solutions. If nobody pursues a particular solution, that's strong evidence about the real costs of the implementation and of the problem itself.

Not conclusively, of course, but as a passive observer it's usually the best evidence available. Really, the answer to the COBOL "problem" is obvious, and it's what basically anybody familiar with it suggests: a piecemeal and gradual shift according to cost, benefit, and opportunity. But something has got to give down the road, right? I feel sorry for the future guy who will be forced by circumstances to make the change.

It has been for a while. I get tired of seeing these same sensationalist falsehoods propagated without challenge, so I shall challenge them. I personally know of layoffs in the local market: a prominent financial services and BPO firm offshored mainframe development and laid off many mainframe developers here. Most got out of software development altogether. This skills shortage simply doesn't exist. I kind of suspect that when companies complain about a lack of talent, what they really mean is a lack of people with the required skill set. God forbid they should pay for training their own employees!

Yep, it's true. Inflated offers via LinkedIn? Zero. Same for Perl.

Why do you think this would be true? For example, I went through a megacorp's COBOL bootcamp with 30 other people. They ran two courses a year until not all that long ago. It's not as if all COBOL training stopped in the 60s.

Someone who was 30 back then is 47 today. Is 47 really "near the end of your career"? Nearer to the end than the beginning, perhaps.

I always get a chuckle when some hopeful naif writes like they're going to retire at 50 and be done. Most everyone I know who's tried that "gets bored" and goes back to work or starts consulting after half a year of trying to find their feet. Those are also the ones who, when asked what happened, tell you they've got plenty of good years left and are now planning on retiring at some later age.

Our company has high turnover at the bottom.

A steady percentage of new hires are bored, retired people.

I guess if one lives in SV, it might be. As a non-lead, non-management code slinger? It's usually well past it.

While verbose, COBOL enforced structure and discipline, and it is very possible for code to be maintained by someone other than the author. On the other hand, in the 90s and early 2000s many critical pieces of software were written in Perl. While Perl is quite well suited as a scripting or glue language, I've seen elaborate integrated systems written in cryptic, unstructured, and undisciplined Perl, often with proprietary extensions.

And no one learns Perl anymore. There is no way to maintain these projects, and in at least two projects I've seen, what should have been a straightforward change led to a large-scale retire-and-replace project.

Sometimes "no one learns Perl anymore" just means that no one is willing to pay good money for some change that the company wants. I know Perl 5 (not 6) really well and programmed in it for about 10 years.

No one I've seen actually wants to hire for it. I'm fine with that, because it wasn't something I loved, much like Transact-SQL programming. I love Objective-C, and it looks like Swift will be more work, not love.

That seems true. I have a Perl jobs group on Facebook, and the salaries are a fair bit lower than for Django, which is what I currently do.

I dread working on Perl code I wrote even just a few years ago.

If I've context-switched away from the code for more than just a few months, it often looks incomprehensible. It really is a write-only language.

Perl is one of my biggest headaches. I had to learn it from my stepmother, who was working at the same company and was the one handling most of the database; I just needed my own version for better location tracking in the warehouse. I would've much preferred to have done that in RPG.

Hell yeah. Every time I've written Perl with another programmer, I learned something new.

I hope this is the case, as I quite liked doing Perl before I switched to Python. That said, I can imagine there is some horrendous Perl out in the wild.

Option 4, which is what banks are actually doing to the best of my knowledge, is to train new hires on how to write COBOL.

Glad to see someone else pointing this out. The problem with these systems is that they are very old, and thus do not benefit from many of the more modern developments in the field, nor do many quality developers learn the language.

The benefit with these systems, though, is that they are very old. With that age comes completeness. They're battle hardened, thoroughly tested, and a known quantity within the institutions that leverage them. History is littered with companies that attempted to replace the core of their business with one big project written with newer technology, only to fail catastrophically. During my first gig with a bank, I was appalled to see the ancient technology in play at the heart of the bank's systems, and the retired developers coming in part time on schedules they set for outrageous hourly rates to do maintenance tasks on that system.

Over time, though, I came to realize that this was the most reliable and cost-effective solution the bank had available. The key thing, however, is how the connection to the old system is made. It's a glacial pace, but modernization around the edges is being attempted. The heart of the old systems, though, was still untouched last I had visibility into those inner workings.

You're correct on all points, in my experience. My previous contracting gig involved writing Windows Server-based code for a UK mortgage lender that interfaced between the internet-based Hometrack valuation service and a Fujitsu ICL-clone mainframe running the core COBOL system for the lender.

The key point about that system is that it is the core ledger for all the mortgage accounts, which add up to many billions. It's the golden record of all the mortgage accounts and payments. Replacing it with a new buggy system based on the latest tech could kill the business.

True, but as you admitted, everything is "a known quantity", which is cool when everything works and nothing has to change. When something breaks, or you need to change something, banks are basically sitting on systems they don't fully control. They're kind of an interesting experiment: how long can you control the dragon?

The language is verbose and relatively straightforward; however, it requires a different approach from what most people are used to.

Is it true that just for knowing the language you get compensated much better? How easy is the job market?

It might not always be the case. Hiring young people and teaching them COBOL or whatever, without paying large amounts upfront, might also be a viable business practice.

It all depends on requirements, clients, etc.

This is how I learned. After a couple of years or so, most of us moved on to better-compensated positions at other companies.

The issue with rewriting to newer technologies is relearning - the hard way - all the requirements encoded over years. As some posters here said already, organizations have to slowly phase in newer technologies and phase out COBOL ones. The track record of Big Bang rewrites is not a good one.

The financial and time cost overruns were spectacular.

I have to wonder if a split in technology exists between commercial and investment banking. You want to lose the hardware? There's a solution for that. You want to build modern web services over the existing COBOL? There's a solution for that. You want to code in a modern IDE? There's a solution for that.

This is a PR piece for Auka, a classic example of one of pg's submarines[1]. Don't panic! Just call Auka, they have your new piece. There's no mention of the option we would all pick, a gradual migration, of course.

And besides that, banks should do what they want to do. The COBOL software they are running is hardly ever the problem; the problems are usually in the new stuff, which has been far less battle-tested and is connected to a hostile network.

Legacy COBOL systems (hell, a lot of legacy code in general) are a perfect example of the "if it's not broken, don't fix it" mantra.

There's an incredible amount of work involved in rewriting legacy code, and when you're talking about doing so in a constantly updating, critical context like financial transactions, it kind of highlights the need to take things slowly and carefully.

It's also an incredibly expensive undertaking, especially since it's a recurring situation. No matter how clean and clear you think your code is, or how thoroughly you've commented everything, in a decade or two someone is probably going to be groaning about it being legacy code.

It's a pity that pg drifts off at the end, assuming that the blog medium would be more serious just because it exists. It's just that the main text-content market hadn't hit the internet yet at the time of writing. Or maybe it was part of his PR campaign to bring HN to life, who knows.

COBOL has been "dying" since the late 70s. Since that time I have seen several languages and platforms come and go. For the kind of money a rewrite costs, the business wants a technology it can hang its hat on for twenty to thirty years. So what are you going to choose?

What languages and platforms in widespread use today are going to be around in twenty to thirty years? COBOL is on that list. Yeah, I can't think of any others either that warrant that kind of expenditure.

It's not only banks. I took an internship at an insurance company a couple of years ago that had its entire mainframe codebase written in COBOL. Not only do they have a hard time finding people to replace their retiring veteran developers, but smaller companies like this one, which can't afford to pay ridiculous salaries for a top-notch COBOL dev, have to settle for the mediocre aging developers who can write COBOL and are on the job market.

These devs are getting paid good money to work on critical systems and aren't skilled enough to maintain them properly. It would be so much cheaper for these companies to pay better devs to work with more recent tech.

But it's hard for them to get out of that loop. Makes me rethink where I have my money.

Get your bid in! "The Government will only accept offers from this [sic] small business concerns. All other firms are deemed ineligible to submit offers."

Well, what about the other couple dozen genders that seem to have surfaced in recent years? Hopefully, in twenty years that kind of complexity will have been abstracted out of existence. Microservices may still exist as an organizational concept, but I'd be stunned if we're still thinking about things like containers in 20 years.

Every generation of developers thinks their stuff will be replaced at some point, but then they realize it's still used twenty years later. Happened with Y2K, Linux timestamps, and a lot of other things.

For now they are both actively maintained and used for new projects, so we are fine for at least the next 50 years.

My limited experience under a government contractor sadly contradicts this: the primary requirement for many projects is to fit inside a server rack and "just work" when hooked up to power and the network.

None of my work was in NodeJS, but I'm positive that JavaScript will be floating around the backends and frontends of our government's services for a long time.

The question is: what bad thing would happen if you pulled this project out of the rack?

In most cases, you'd probably make the employees at whatever agency or department very unproductive for the next few days, or however long it takes to find the server and plug it back in. The kinds of projects I'm talking about are usually internal and critical for getting the right data to the right people within an agency.

At what point does something become critical, in your book? The kinds of services I'm describing are critical to the normal functioning of government agencies. They're running now, and they'll probably be running for the next 25 years, at which point another contractor will upgrade them to something new and shiny.

That's why it is only really possible to change those systems every 25 years: people will be less productive after a software change, and they will face new bugs during the transition period.

A lot, but not all! Depends on the government agency, I'd say. For example: unemployment or Social Security checks going out, VA hospital systems, etc.

Running on AWS and doing it correctly, so that one of the many regional outages does not cause you an issue, is really hard, and it is a software problem. On the other hand, using a system of tested, proven code that runs on hardware that is redundant by design at the lowest level of components (a mainframe cluster, where the software does not have to have knowledge of the redundancy) is a bit simpler.

I really think people miss this very important point. Redundancy is transparent to the software on the mainframe; for something like a modern application on AWS, you have to know, understand, and code for the infrastructure.

We're talking about two different issues here.

The first is whether redundancy really is transparent to the programmer; the second is the issue of hosting and maintaining your own hardware. On the first issue, I'll admit I don't know enough about mainframes; perhaps they do provide redundancy transparently to the programmer, but I'm a bit skeptical. Perhaps you simply think the redundancy is transparent because you understand the mainframe infrastructure deeply and have internalized it? The second issue I'll push back on.

So that's about 4 hours of downtime due to outages a year (roughly what an availability figure around 99.95 percent works out to: 0.0005 × 8,766 hours ≈ 4.4 hours). Put your servers in a multi-AZ autoscaling group and you're pretty solid. Bonus points for having autoscaling groups in more than one region.
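
For what it's worth, here is a minimal sketch of what "put your servers in a multi-AZ autoscaling group" can mean in practice, using boto3. The group name, launch template, and subnet IDs are made-up placeholders, and it assumes the launch template and the subnets (each in a different Availability Zone) already exist:

    import boto3

    # Sketch only: names, sizes, and subnet IDs are hypothetical.
    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-frontend-asg",
        LaunchTemplate={"LaunchTemplateName": "web-frontend", "Version": "$Latest"},
        MinSize=2,
        MaxSize=6,
        DesiredCapacity=2,
        # Subnets in different Availability Zones: if one AZ has an outage,
        # the group keeps serving from, and replaces capacity in, the others.
        VPCZoneIdentifier="subnet-0aaa1111,subnet-0bbb2222,subnet-0ccc3333",
    )

The caveat from the earlier comment still applies: this only buys you anything if the application itself tolerates instances disappearing and being replaced, which is exactly the "software problem" being described.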

If you don't want to use any of the other services AWS provides, you can simply use it as a basic hosting service and get pretty amazing uptime. I've never used an old mainframe system; can you get greater uptime than that?

Including hardware failures, power outages, and network outages? There was one point in my life when I was working for a large mainframe company; a bank wanted to move a system to a new site. The uptime on that system was 11 years. Just have a Google for "mainframe uptime".

Yes, the software is abstracted from the HW redundancy.

So, what is Hogan?

Clearly the raw code is available for inspection. But to understand what a Hogan system does requires a deep understanding of how the PEM meta-data drives the way program elements are sequenced, and of which programs are accomplishing which tasks on what databases.

Without the instance PEM meta-data, one simply cannot determine the call structure. In general, almost all linkages between the Hogan elements are completely controlled by the PEM instance meta-data. To plan a trip, you not only need a map showing the cities, you also need to know specifically how the roads connect them. Using conventional code-analysis tools is a big improvement over just inspecting code manually, but it does not address the key issues that make it difficult for Hogan users to understand and adjust their Hogan applications.

Such tools also require that users have deep knowledge of Hogan and of the code, which is increasingly rare in IT environments with strained budgets. But failure to understand or revise the applications correctly can lead to severe internal and external issues, precisely because this is core banking software. Owners of Hogan software are always clamoring for better ways to see how Hogan is organized from their perspective.
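
To make the "roads between the cities" point concrete: if the PEM linkage meta-data could be extracted into simple (from, to) records, deriving the call structure would be an ordinary graph walk. The element names and record shape below are entirely made up for illustration, since Hogan's real PEM structures are proprietary; the point is only that the linkages live in data rather than in the COBOL source:

    from collections import defaultdict, deque

    # Hypothetical extract of PEM linkage rows: (from_element, to_element).
    pem_links = [
        ("TXN-OPEN-ACCT", "ACTIVITY-VALIDATE"),
        ("ACTIVITY-VALIDATE", "PGM-CUST-LOOKUP"),
        ("ACTIVITY-VALIDATE", "ACTIVITY-POST"),
        ("ACTIVITY-POST", "PGM-LEDGER-UPDATE"),
        ("PGM-LEDGER-UPDATE", "DB-DEPOSIT-MASTER"),
    ]

    graph = defaultdict(list)
    for src, dst in pem_links:
        graph[src].append(dst)

    def reachable(entry):
        """Walk the metadata-defined linkages from an entry point."""
        seen, queue = set(), deque([entry])
        while queue:
            node = queue.popleft()
            if node in seen:
                continue
            seen.add(node)
            queue.extend(graph[node])
        return seen

    # Every element a given transaction ultimately touches, per the meta-data:
    print(sorted(reachable("TXN-OPEN-ACCT")))

The walk itself is trivial; the difficulty described above is that, without the instance meta-data and someone who understands it, there is nothing reliable to feed into a traversal like this.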


