 Post subject: Replication, Redundancy, Scalability
PostPosted: Sat Apr 27, 2013 4:55 am 

Joined: Sat Apr 27, 2013 3:19 am
Posts: 2
Location: Honolulu Hawaii
Real Name: Curtis Kropar
Began Programming in MUMPS: 0- 7-1985
Aloha!

So, I have some questions about the new/latest implementations of MUMPS.

I am an old school MUMPS programmer from way back in 1985. I have written some pretty hefty systems with it over the past 10,000 years, but have not touched it much in the past 8-10 years.

I used to use MSM (from Micronetics, which was later acquired by InterSystems), then GT.M for a while.

Most people are aware there is a difference between MUMPS the language and MUMPS the database. However, many people are not aware that MUMPS was actually three different things, not just two: MUMPS could also run as the operating system.

My questions revolve around what was once built-in functionality of MUMPS, before Windows, before the web, before modern networking... (we used to use serial ports!)

First question: what MUMPS implementations are currently available? I know of Caché and GT.M; are there any others? Caché costs too much for our clients to even consider, so I am interested in free options.

Here is what I want to know: whether the current MUMPS implementations are like the old ones, and what they can do.

In the old days:
1) MUMPS database size was pretty much limitless. If you ran out of space, you simply created a new database on a new hard drive and mounted/mapped it into the active MUMPS system. You could expand the data to virtually any scale you needed. This included drive farms and distributed data: being able to put drives in other computers and tell the database that those drives in those other computers are part of the single database.
Can the current MUMPS do that? GT.M or something else?

2) With the mounting/mapping mentioned in #1, it was also possible to do full data replication and failover. In other words, you could have 5 different computers across the country writing the same data to their respective hard drives at the same time. If one of them died, big deal; you could just have your systems retrieve data from one of the other machines.
Can the current MUMPS do that? GT.M or something else? (Without implementing a $100,000 hardware "solution".)

3) Is there a current, easy method of redistributing and rebuilding replicated data if a computer goes offline or is isolated from the replicated cluster/database? Basically a full resynchronization in two or more directions.

The scenario: 3 replicated databases - 1 at the client's main site, 1 at our web server, 1 remote backup.
a) The client's main site loses its internet connection (it happens almost daily - don't ask) but is still running. Activity and updates are still happening on their local machine.
b) Remotely, the client's people are out at their other 3 sites and are doing updates, but those are hitting our web servers.
c) An hour later, the main site gets back online. All of the updates from the local server need to be replicated over to the web server and the remote backup. All of the updates from the web server need to be replicated over to the local machine.

I am thinking it can be done through transactions and journaling - a rough sketch of what I have in mind is below. Is there an easier/better way?
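Purely as an untested sketch from memory (the global, variable, and file names are just placeholders, and I am assuming GT.M-style transactions and MUPIP journal extracts): each logical update gets wrapped in a transaction so the journal carries it as one unit,

 ; application update, wrapped so it journals as a single unit
 tstart ():serial
 set ^ORDERS(ordId,"status")="shipped"
 set ^ORDERS(ordId,"shipped")=$horolog
 tcommit

and after the main site comes back, something like

 $ mupip journal -extract=missed_updates.txt -forward "*"

pulls the updates each side made while disconnected out of its journal files, so they can be replayed or reconciled on the other side.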

I have other questions but these are a good starting point.

Currently we have a web-based piece of software we designed that clients access to manage all of their day-to-day activity. Several of the clients lose their internet connections on an almost daily basis. This leaves them dead in the water, as they do not have any local servers running our system. Our current system is built with ASP (Active Server Pages) and MS SQL Server 2000. We have MS SQL 2005 and it has replication, but even if we migrate over to it on our server, there is still no way the clients can afford to install it at every site in order to do replication. So for the past two years or so I have been considering switching to a different database, and I remembered that MUMPS was the shizzle when it came to this type of thing.

Is it still?

We are looking to install Linux servers at many of our clients' sites to handle other tasks. Since we are considering that, it may make sense now to migrate over to MUMPS for the primary data as well. Our web servers, however, are still Windows-based, but that could possibly be adjusted too.

THANKS for your feedback and attention.

_________________
--------------
http://www.HawaiianHope.org
Providing Technology to non profit orgs, Shelters, Food Pantries and more. To date we have given away over 900 FREE computers! Got a computer you are no longer using? Don't Recycle(Scrap) It! Donate it!


 Post subject: Re: Replication, Redundancy, Scalability
PostPosted: Tue Apr 30, 2013 2:22 pm 

Joined: Mon Nov 15, 2010 3:56 am
Posts: 2
Location: Malvern, PA, USA
Real Name: K.S. Bhaskar
Began Programming in MUMPS: 02 Jan 1995
GT.M's limits are that one global must fit in one database file, but there is no limit on the number of database files. An individual database file is limited to 992Mi blocks, so with the popular 4KiB block size, the maximum size of a single global is 3,968GiB. Maximum block size is 65,024 bytes.
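To make that concrete, here is a rough sketch of how a second database file on another disk can be added to a global directory with GDE and then created with MUPIP (the segment, region, and global names here are just examples; the exact qualifiers are in the Administration and Operations Guide):

 $ $gtm_dist/mumps -run GDE
 GDE> add -segment HISTSEG -file_name=/disk2/gtm/history.dat
 GDE> add -region HISTREG -dynamic_segment=HISTSEG
 GDE> add -name HISTORY -region=HISTREG
 GDE> exit
 $ $gtm_dist/mupip create -region=HISTREG

After that, references to ^HISTORY go to the new file on the second disk while everything else stays in the default region, and application code does not change.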

Apropos replication, with GT.M, all business logic is executed on one instance, but it can replicate to as many as 16 secondary instances, each of which can replicate to 16 tertiary instances, etc. For an understanding of what you can do with replication, please take a look at the replication chapter in the GT.M Administration and Operations Guide, GT.M Edition, and then look at the GT.M Acculturation Workshop.
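As a very rough sketch of the moving parts (instance names, host, port, buffer size, and log paths below are placeholders, and prerequisites such as journaling and the gtm_repl_instance environment variable are omitted):

 # on the originating instance A
 $ mupip set -replication=on -region "*"
 $ mupip replicate -instance_create -name=A
 $ mupip replicate -source -start -instsecondary=B -secondary=hostB:4000 -buffsize=1048576 -log=/var/log/gtm/A_to_B.log

 # on the replicating instance B
 $ mupip replicate -instance_create -name=B
 $ mupip replicate -source -start -passive -instsecondary=dummy -buffsize=1048576 -log=/var/log/gtm/B_passive.log
 $ mupip replicate -receiver -start -listenport=4000 -buffsize=1048576 -log=/var/log/gtm/B_receive.log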

For all GT.M user documentation, go to http://fis-gtm.com and click on the User Documentation tab. For the GT.M Acculturation Workshop and other downloads, go to the GT.M project at SourceForge (http://sf.net/projects/fis-gtm) and click on the Files tab.
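The resynchronization scenario in the original post (a site dropping offline and catching up later) is also covered there. Very roughly - and only as a sketch, with the port and file names made up - when the formerly-offline instance rejoins as a replica, a rollback with -fetchresync winds it back to the last state the two instances have in common, writes whatever updates the other side never saw to a lost-transaction file, and then the receiver catches up from the originating instance:

 $ mupip journal -rollback -backward -fetchresync=4000 -losttrans=/var/log/gtm/lost.txt "*"
 $ mupip replicate -receiver -start -listenport=4000 -buffsize=1048576 -log=/var/log/gtm/catchup.log

That lost-transaction file is what gives you the "two direction" part: the application decides how those updates get re-applied or reconciled.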

Note that with GT.M triggers, it is also possible to create replication scenarios and configurations with application code.
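For example (the names below are invented for illustration), a SET trigger can capture every update to a global into a queue that application code then forwards wherever it needs to go. The trigger definition lives in a file loaded with MUPIP TRIGGER:

 +^ORDERS -commands=SET -name=CaptureOrders -xecute="do CAPTURE^ORDREPL"

 $ mupip trigger -triggerfile=orders.trg

and the routine it calls might look roughly like:

CAPTURE ; queue the triggering update for application-level replication (sketch)
 new seq
 set seq=$increment(^REPLQ)
 set ^REPLQ(seq,"ref")=$reference
 set ^REPLQ(seq,"val")=$ztvalue
 quit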


 Post subject: Re: Replication, Redundancy, Scalability
PostPosted: Tue Apr 30, 2013 7:59 pm 

Joined: Mon Nov 01, 2010 3:33 pm
Posts: 104
Location: Australia
Real Name: Ray Newman
Began Programming in MUMPS: 01 Jul 1976
MUMPS V1 allows a database of up to half a Petabyte (512 Terabytes). It all goes in one file, which would have to be managed by the O/S. Recovery is only via journals (which don't need to be on the same disk - I use dedicated mountable disks).

MUMPS V1 is a full 'in database' model like DSM.

and it's free - see http://sourceforge.net/projects/mumps/


Ray Newman

