Project Information
Project Outline
Initial Milestones
Formal Plan
Contingencies
Revision of Milestones
Reports
Log Book
The main goal of this project is to increase the flexibility and decrease the
cost of the Remedi CRM application. CRM stands for Customer Relationship
Management, and is an application used by companies to manage the relationships
they have with their customers. Remedi is an instance of a CRM application
which is targeted specifically towards pharmaceutical companies who wish to
market their drugs in New Zealand.
Remedi is quite dependent on the Microsoft platform. If this dependence were
removed, Remedi would gain a few advantages. The first is increased
flexibility: if the interface were not tied to MS SQL and Access, Remedi could
be used by a wider range of people, including those who are not familiar with
MS SQL and would feel more comfortable using something like Oracle. In the
business world, though, flexibility alone is usually not reason enough to
modify an application; it has to be weighed against factors such as the cost of
making the modification.
The costs of getting Remedi up and running are currently considerable. They
include the MS SQL Server licences, the hardware the server needs to run on,
the client machines which also need to be purchased, and the licences needed to
run both MS SQL and Windows on them. This can easily add up to tens of
thousands of dollars, all before including the cost of Remedi itself, which
usually starts at around the same amount.
The goal of increasing flexibility can be satisfied simply by converting Remedi
to a new environment. Keeping the cost of the conversion down is a little more
difficult because of the labour and software / hardware costs involved, but it
becomes easier to satisfy if the cost of the software itself is minimal or
free. With free software, advantages also appear for the client, who can now
cut out a huge amount of the cost associated with just setting up the platform
to run Remedi on. This is one of the main reasons why we picked open source
software as the new base platform for Remedi instead of another commercial
platform.
The conversion process itself could also be useful: the documentation and the
problems encountered along the way can be collected into a knowledge base, so
that future shifts to other platforms can refer back to the difficulties
encountered during the initial conversion.
This section lists the original three milestones which were planned at the
beginning of the year, describing what we wanted to achieve.
The first task was to get acquainted with the way Remedi worked. After that, it
was to figure out exactly which pieces of Remedi could be converted, and to
find out what software could be used to replace those pieces. Once this stage
was completed, the following stages could then be written out.
Stage One
One of the things which could be converted was the database back end. From the
planning and research stage, it was discovered that PostgreSQL (an open source
database) was the best database for the job.
The first stage of the project, then, was to port the existing MS SQL back end
over to PostgreSQL.
Stage Two
The second thing which could be converted was the front end. Research during
the planning stage brought us to the conclusion that Borland Kylix would be
best suited for this stage.
It was also questioned whether the Kylix front end should be interchangeable
with the existing MS SQL back end. This question was never answered, but it was
agreed that compatibility between the PostgreSQL back end and the Access 97
front end should be a requirement.
Stage Three
This stage was an additional one, to be attempted only if we had time to
spare: a scaled-down version of Remedi for the Palm environment.
This would allow sales reps to carry Palms around so data could be entered
directly into them, increasing efficiency over the current system, where sales
reps take notes in a notebook and then input them into their computers when
they return to the office.
Formal Plan
With the three main steps in place, the following formal plan was written up:
- Import the current MS SQL database backend into PostgreSQL.
- Test PostgreSQL server with current VBA front end. This in itself would
allow current customers to migrate to a Linux backend and also provide the
first step for a migration plan.
- Choose and commit to a development environment.
- Code all basic maintenance screens.
- Code direct marketing master form.
- Code CRM master form.
- Code import / export feature.
- Test the above. This would then give us the second step in a migration plan:
a full database data entry system which would only be missing the reporting
and querying functionality. It would probably still be possible to implement
this with the help of third party tools.
- Consider either creating the reporting and querying functionality ourselves
or finding an adequate third party solution. A full Remedi replacement would
then be available.
- If time permits, look at coding the Palm solution.
It should be noted that in the planning stages of the goals above, the risk of
arriving at a step and not being able to complete it was also taken into
account. Because of this, the steps were structured so that the project would
still hold value even if not all the stages were completed. Steps 2 and 8 of
the formal plan are marked as "safe" places to arrive at: if those steps were
reached, the work done up to that point could still be used.
By the end of the first half of the year, it was apparent that the goals set
out above were a little too ambitious for the time remaining. Instead, for the
remainder of the year, the focus was on completely porting over the back end
database, to make the first half of the migration plan possible.
Introductory Seminar (download)
Mid-Semester Report (download)
Mid-Semester Seminar (download)
Mid-Semester Seminar Summary (download)
Draft Final Semester Report (download)
Draft Final Semester Report 2 (download)
Real Final Semester Report (download)
Final Seminar (download)
Final Seminar Summary (download)
Monday 3 March
- Researching other CRM packages in Linux.
- Compiere (www.compiere.org) seems to be the only other solid stand-alone package with CRM functionality.
Wednesday 5 March
- Meeting with Mano and BTech group today to finalise our projects.
Tuesday 6 March
- Started researching RAD tools for development in Linux.
- Omnis
- Borland Kylix (final choice)
- Known and reputable brand
- Good support base
- Cross Platform
- Free - Open Edition (which is linked against GPL'd libraries and thus has to be released under the GPL). A Developer version can also be purchased if a release under a commercial licence is required.
Friday 7 March
- Finished download of Kylix Open Edition from Borland.
- Started going through a few Kylix tutorials.
Tuesday 11 March
- Researching different databases for Linux.
- MySQL
- PostgreSQL (final choice)
- Supports a larger number of the functions which are required.
Tuesday 18 March
- Installed and set up PostgreSQL on a Pentium 150 machine with 64 MB of RAM and a 4 GB HDD running Linux. Will upgrade to another box depending on performance.
Friday 21 March
- Got a backup copy of the Remedi back end from MediMedia (MS SQL 2000 version)
- Stored in ./stage 1/source/mmdb_db_200301072200.bak.zip
- Also got a copy of the Remedi demo CD (just like normal Remedi except the front and back ends are coded in Access 97 with a limited data set)
Saturday 29 March
- Started conversion of Access 97 backend to PostgreSQL.
- Found "ConvertToPostgreSQL.mdb" from SevaInc.com which converts from Access 97 to PostgreSQL. Conversion was successful.
Thursday 3 April
- Evaluated re-creating the tables missing from the demo version of Remedi instead of porting the MS SQL database over to PostgreSQL.
- Decided against it.
- Installed MS SQL 2000 on a Windows 2000 machine (a 1 GHz Duron with 384 MB RAM) and imported the MS SQL 2000 backup of the Remedi back end into it. This machine is the same machine which I am currently working on with the front end. The Pentium 150 is a separate machine which I'm using for the PostgreSQL server (which runs Linux).
Monday 14 April
- Found modelling tool called Data Architect from thekompany.org (people who make KDE) to convert MS SQL 2000 back end to PostgreSQL back end.
- Importing failed
- Downloaded and tried older versions on different Windows platforms but still failed.
- Googled for solution, none found.
Wednesday 16 April
- Downloaded Omnis, installed and tested.
Thursday 17 April
- Continued testing Omnis. Too buggy.
Wednesday 23 April
- Started scripting database conversion because of inability to find suitable conversion tools.
- Exported SQL scripts from Enterprise Manager.
- Did a find / replace on terms which weren't compatible with PostgreSQL or ANSI SQL statements.
- http://techdocs.postgresql.org/techdocs/convertsqlsvr2pgsql.php
Thursday 1 May
- Found pgAdminII with its migration wizard plugin.
- Importing tables one at a time into PostgreSQL due to primary key dependencies.
Thursday 8 May
- Started on checking and cleaning up back end data.
- Correct field types
- Proper row count (no rows missing, etc)
Thursday 5 June
- Presented mid-semester seminar.
Friday 6 June
- Handed in mid-semester report.
Saturday 7 June
- Some more background research.
Sunday 8 June
- Started attribute comparison (finished about half way)
- NOT NULLs and default values of fields
- Primary keys converted over alright
- Sequences (auto numbers) created alright (but still won't work in Access in the current state, will do research on it later)
Monday 9 June
- Finished attribute comparison
- Creating views
- Used existing SELECT statements from MS SQL
- No complex SQL, copied straight through from MS SQL
- Used single quotes instead of double quotes
- Removed dbo.mmdb. prefixes from table names
Tuesday 10 June
- Using row counts to check data imported correctly
- Used MS Access w/ ODBC links to each back end to check row counts of each table are the same.
- Visual Inspection to check data import, also via ODBC connection via MS Access.
Wednesday 11 June
- Finished data checking
- Booleans are 0 and 1 in PostgreSQL and 0 and -1 in MS SQL (easy to change with PostgreSQL ODBC driver)
- Found out the binary data which pgAdminII was complaining about during the importing phase was the MS SQL timestamp field, which is meant to speed up record browsing performance in the Access front end; after talking to Colin, discovered that little or no advantage has been noticed.
- Views are taking a very long time to display in PostgreSQL compared with MS SQL
- Could be due to the machine differences
- Discovered indexes are required to get views working at a reasonable speed.
- Have to recreate indexes in PostgreSQL (will do research on this later).
- Linked the front end of the database (frontend.zip found in ./stage 1/source) to the PostgreSQL back end
- This was achieved by deleting all references to the existing tables and re-linking them via ODBC to the new PostgreSQL back end.
- The linking was successful, I think, but we're stuck on the login screen. Not all the drop down boxes are populating, so no users can log in properly.
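For reference, recreating one of the missing indexes is a single statement. A sketch only: the index name below is made up, while the pracdoc table and practice_id column come from the schema discussed elsewhere in this log.

```sql
-- Hypothetical example: the index name is invented.
CREATE INDEX idx_pracdoc_practice_id ON pracdoc (practice_id);

-- EXPLAIN shows whether a query over the view's base table now uses it.
EXPLAIN SELECT * FROM pracdoc WHERE practice_id = 1;
```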
Thursday 3 August
- Checked all indexes were created correctly.
- Automatically done with pgAdminII but still checked them anyway
- Site regarding index creation in pgAdminII Migration Wizard http://archives.postgresql.org/pgadmin-support/2003-01/msg00027.php
- Reading up on indexes in PostgreSQL
Saturday 5 August
- Installed PostgreSQL on the 1 GHz Duron with 384 MB RAM (the same machine as the MS SQL 2000 server).
- Used to see whether indexes or machine speed is the slowdown (currently, the PostgreSQL database is on a Pentium 150 machine, which could be the cause of the slow views).
- Opened a view to see if views open any quicker. The v_adis_doctor view opened in about 15 minutes. I don't know how long views took to open on the Pentium 150 machine, as I gave up after half an hour.
- Linked Access front end to MS SQL instead of PostgreSQL to see if the log in problem I had the other day was because of a corrupted MS SQL database which I then imported to PostgreSQL.
- The MS SQL database worked flawlessly.
- Renamed the table names in the PostgreSQL version so they’re all capital letters (just like the MS SQL database). Still doesn’t work.
- Mailed Colin, waiting for reply
Sunday 6 August
- Checked v_adis_doctor and v_adis_doctor_ids views were created correctly (just checked the syntax used). Still no change in speed.
- Checked indexes were created in the tables related to the views, and they all were there
- Re-linked PostgreSQL back end to front end again to see if it works via PostgreSQL (maybe corrupted links?). Still doesn’t work.
Tuesday 8 August
- Compared tables present in Access front end linked to MS SQL back end and Access front end linked to PostgreSQL to see if both contain the same data. All tables contained the same number of rows and the data was displayed correctly (i.e. booleans 1/0 vs. -1/0, date formats, etc).
Sunday 17 August
- Brought computers over to Colin's place for hands on help.
- Performance of views
- Discovered Remedi front end has optimisations put in place if it is connected to an MS SQL back end. If it is connected to an MS SQL back end, it just requests the view directly from the MS SQL server, but if it isn’t connected to the MS SQL database, it requests all the tables associated on that view, and then performs the query locally. One of the things which trigger this is the existence of the v_* queries. To make it so the views are created on the server, these v_* queries have to be deleted, and the v_* views have to be linked via ODBC as tables.
- Note to self: to re-link the views, the primary keys of the views have to be set as the aggregate of all the primary keys in the tables which the view refers to. Go through this slowly so as not to miss any primary keys.
- Log in form (with drop down menus)
- The form is called frmSetRep Identity.
- Checked out what populates the drop down menus.
- Found out the second field wasn't working because the view which the drop down menu populates its list from is not being generated properly.
- The ID field of the pracdoc table was an auto-number field in MS SQL, but in PostgreSQL there is no such equivalent; it has to be fudged. Because of this, the auto-number field was set as a text field instead of as an int. Maybe this is the cause of the view missing some extra rows (check this later).
- Meeting was about 5 hours
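For reference, the usual way PostgreSQL fudges an MS SQL auto-number is a sequence wired into the column's default. A sketch only: the table here is a stand-in, with the sequence name modelled on this project's doctor table rather than copied from the real schema.

```sql
-- Illustrative sketch of emulating an auto-number with a sequence.
CREATE SEQUENCE doctor_doctor_id_key;

CREATE TABLE doctor_sketch (
    doctor_id int NOT NULL DEFAULT nextval('doctor_doctor_id_key')
    -- ...remaining columns omitted...
);

-- Each INSERT that omits doctor_id draws the next number from the sequence.
INSERT INTO doctor_sketch DEFAULT VALUES;
```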
Wednesday 20 August
- Printed out some ERD diagrams
- core model
- central entities
- call reporting
- territory structure
- priority
- worked out hours
- downloaded + read some past reports
Thursday 21 August
- Tidied up log, read some more past papers
Friday 22 August
- Tidied and updated log book
- Created to do list for Saturday
Saturday 23 August
- Re-linked views to front end
- The primary keys of each view's source tables are:
v_adis_doctor
doctor: doctor_id
v_adis_doctor_ids: category_id, speciality_id, doctor_id
v_adis_doctor_ids
doctor_specialty_grouping: category_id, speciality_id, doctor_id
v_adis_dsg
doctor_specialty_grouping: category_id, speciality_id, doctor_id
v_adis_prac_ids
practice: practice_id
v_adis_pracdoc
pracdoc: affiliation_id, pracdoc_type_id, practice_id, doctor_id
v_adis_prac_ids: practice_id
doctor: doctor_id
v_adis_doctor_ids: category_id, speciality_id, doctor_id
v_adis_practice
practice: practice_id
v_adis_prac_ids: practice_id
v_doctor_multiple_specs
doctor: doctor_id
doctor_specialty_grouping: category_id, speciality_id, doctor_id
v_doctor_primary_specs
doctor: doctor_id
doctor_specialty_grouping: category_id, speciality_id, doctor_id
v_doctor_primary_specs_work
doctor: doctor_id
doctor_specialty_grouping: category_id, speciality_id, doctor_id
pracdoc: affiliation_id, pracdoc_type_id, practice_id, doctor_id
practice: practice_id
v_doctor_secondary
doctor: doctor_id
doctor_specialty_grouping: category_id, speciality_id, doctor_id
v_dr_prim_specs_work_secndry
doctor: doctor_id
doctor_specialty_grouping: category_id, speciality_id, doctor_id
pracdoc: affiliation_id, pracdoc_type_id, practice_id, doctor_id
practice: practice_id
v_notespump_mmclient_people
v_doctor_primary_specs_work: doctor_id, category_id, speciality_id, doctor_id, practice_id, affiliation_id, pracdoc_type_id
doctors_priority: practice_id, priority_list_id, doctor_id
v_novo_doctor_ids
doctor_specialty_grouping: category_id, speciality_id, doctor_id
v_pms_doctor
doctor: doctor_id
v_pms_doctor_ids: category_id, speciality_id, doctor_id
v_pms_doctor_ids
doctor_specialty_grouping: category_id, speciality_id, doctor_id
v_pms_dsg
doctor_specialty_grouping: category_id, speciality_id, doctor_id
v_pms_doctor_ids: category_id, speciality_id, doctor_id
v_pms_prac_ids
practice: practice_id
v_pms_pracdoc
pracdoc: affiliation_id, pracdoc_type_id, practice_id, doctor_id
v_pms_prac_ids: practice_id
v_pms_doctor_ids: category_id, speciality_id, doctor_id
v_pms_practice
practice: practice_id
v_pms_prac_ids: practice_id
v_pms_practice_ids
practice: practice_id
v_primary_specs
doctor: doctor_id
doctor_specialty_grouping: category_id, speciality_id, doctor_id
v_priority_lists_doctor_count
doctors_priority: practice_id, priority_list_id, doctor_id
doctor: doctor_id
v_terr_brick_rep
terrbrck: division_id, territory_id, brick_id
terrmst: division_id, territory_id
brick: brick_id
- Found a missing view which wasn't created (v_dr_prim_specs_work_secndry). Having some trouble with the CREATE statement of the view.
- Changing auto-number field in the doctor table from a text field to a numeric field
- doctor_id | int | not null default nextval(‘doctor_doctor_id_key’::text)
- Have to change the "text" section to a number? Maybe that was why Access was recognising the doctor_id field as a text field instead of as a numeric / auto-number?
- The ::text isn’t changeable.
- Created a test PostgreSQL database with an auto-number field. It seems to be displayed as a text field in Access only when the field is an int8, but not when it's a numeric or a regular int.
- Found an option in the ODBC driver to display int (or int8) fields as int4s, but when the table is loaded up in Access, it doesn't display properly (it shows deleted fields).
- I doubt this is causing the problems with the view displaying properly
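The test database mentioned above would have looked something like the pair of tables below (all names here are made up). The int8 auto-number is the one the Access ODBC link shows as a text field; the int4 version displays as a number.

```sql
-- Hypothetical test tables for the int8-vs-int4 display problem.
CREATE SEQUENCE t_int8_seq;
CREATE TABLE t_int8 (id int8 NOT NULL DEFAULT nextval('t_int8_seq'));

CREATE SEQUENCE t_int4_seq;
CREATE TABLE t_int4 (id int4 NOT NULL DEFAULT nextval('t_int4_seq'));
```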
Sunday 24 August
- Going through the login form to see what it is exactly which is blocking it up (frmSetRep Identity).
- The second column is dependent on the view v_doctor_primary_specs_work.
- MS SQL shows it to contain 34,519 rows, whereas in PostgreSQL it only has 20,152 rows.
- I ran part of the view, specifically the section below
select count(*)
from pracdoc
where pracdoc.pracdoc_type_id = 'P'
and noticed that the result from MS SQL was 22,677 rows, but in PostgreSQL it was only 22,671: 6 rows were missing. I tried using a P which wasn't capitalised, and it displayed the 6 missing rows.
- PostgreSQL was case sensitive. Should I change all the Ps in the form to be capital, or should I restructure the view? For the time being, I’ll modify the view so that it is case insensitive (instead of modifying the data) using the UPPER() function.
- The v_doctor_primary_specs_work view has another view, v_notespump_mmclient_people, which depends on it and needs to be re-created if I am to modify v_doctor_primary_specs_work.
- Note to self, re-create this view later.
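The case-insensitive rewrite described above, sketched against the count query from the login-form investigation (the full view body isn't reproduced here):

```sql
-- Wrapping the column in UPPER() makes the comparison case insensitive
-- without touching the data: both 'P' and 'p' rows are counted.
SELECT count(*)
FROM pracdoc
WHERE UPPER(pracdoc_type_id) = 'P';
```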
Sunday 31 August
- Picked up a Pentium III 666MHz machine from Colin to install PostgreSQL on.
Tuesday 3 September
- Installed Linux and PostgreSQL on the Pentium III machine and imported the database from the old Pentium 150 machine.
Wednesday 4 September
- Discovered that MS SQL treats nulls differently from PostgreSQL.
- Found via exporting views to a .sql text file instead of reading the view straight off.
- There are apparently two ways of treating nulls in comparisons: the ANSI way, where comparing a column to NULL with = never matches anything, and a non-ANSI way, where = NULL matches cells which physically contain no data. Most databases by default, when you run the statement below:
select *
from pracdoc
where pracdoc_type_id = NULL
follow the ANSI behaviour and return no rows. MS SQL has a setting called ANSI_NULLS which, when set to OFF, makes = NULL actually match the cells containing no data.
- There is no equivalent setting in PostgreSQL.
- Found that the IS NULL comparison is a good enough substitute: it returns the rows which MS SQL's ANSI_NULLS OFF behaviour would have matched with = NULL.
- The view v_doctor_primary_specs_work now has the same row count in PostgreSQL and MS SQL.
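A sketch of the substitution described above, reusing the column names from the earlier example:

```sql
-- ANSI behaviour: = NULL never matches, so this returns zero rows.
SELECT count(*) FROM pracdoc WHERE pracdoc_type_id = NULL;

-- IS NULL is the substitute: it matches the rows MS SQL's
-- ANSI_NULLS OFF setting would have matched with = NULL.
SELECT count(*) FROM pracdoc WHERE pracdoc_type_id IS NULL;
```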
Thursday 5 September
- Went through the rest of the views to see if they all work on the new P3 666 machine.
- Found a few views which return different row counts in PostgreSQL. They were views with the NULL and UPPER() problems mentioned before.
- v_doctor_primary_specs
- v_primary_specs
- v_priority_lists_doctor_count
- Found some views which took ages to load; they all seem to follow the syntax below
select *
from practice
where practice_id in (select practice_id from v_pms_prac_id)
This was fine to load in MS SQL, but in PostgreSQL it took ages. Instead I rewrote it so it looks like this:
select practice.*
from practice, v_pms_prac_id
where practice.practice_id = v_pms_prac_id.practice_id
and that seemed to speed things up. I’ll have to confirm with Colin before I convert the views over. The views which suffer from this problem are:
- v_adis_doctor
- v_adis_pracdoc
- v_adis_practice
- v_pms_doctor
- v_pms_dsg
- Had to leave the IN() statement in because the rewrite was slower (the join was huge)
- v_pms_pracdoc
- v_pms_practice
- I re-linked the database so it points to the new PIII 666 server, with the new v_primary_specs_work view, and it showed all the drop down boxes correctly.
- The problem now is that once you log in, it quits with an error, preventing me from getting into the main view screen.
Friday 6 September
- Re-created the previous views which suffered from the IN() problem
- Also discovered that the UPPER() function takes a while to run, so instead I found the ILIKE operator, which is like = but performs case-insensitive matching, i.e.
select *
from pracdoc
where pracdoc_type_id ilike 'p'
There were also regular expressions which were able to perform case-insensitive searches, but they seemed to slow searches down as well, so I decided against them.
Sunday 8 September
- Fixed the views with the different row counts, but didn’t manage to fix v_priority_lists_doctor_count because of a weird case sensitivity problem. Am still racking my brains out over it :/.
Monday 9 September
- Ok, so the form logs in, and the problem seems to be with the query qry_setrep_set_report_header_company, which is executed when all the details have been entered. It’s an update query, whatever that means …
Friday 12 September
- Had to return Gates back to Colin :/
Monday 15 September
- Found the following in the PostgreSQL FAQ:
4.22) Why are my subqueries using IN so slow?
Currently, we join subqueries to outer queries by sequentially scanning the result of the subquery for each row of the outer query. If the subquery returns only a few rows and the outer query returns many rows, IN is fastest. To speed up other queries, replace IN with EXISTS:
SELECT *
FROM tab
WHERE col IN (SELECT subcol FROM subtab);
to:
SELECT *
FROM tab
WHERE EXISTS (SELECT subcol FROM subtab WHERE subcol = col);
For this to be fast, subcol should be an indexed column. We hope to fix this limitation in a future release.
- I don't have Gates, so I can only guess that this is the reason why it's slow.
Wednesday 17 September
- Managed to secure a Pentium III 500 MHz machine from a friend. Just did a hard drive transplant from my existing Pentium 150 box to get it up and running again.
Thursday 18 September
- Finally created the v_dr_prim_specs_work_secndry view. I thought it might have been the inner and outer joins not working, but eventually isolated it down to the raw SELECT statement itself: in MS SQL, AS keywords aren't required, whereas in PostgreSQL they are.
- Also finally created the v_priority_lists_doctor_count view which was giving me problems with the group by function.
- Funny things happened with the GROUP BY and COUNT functions, which caused rows to be displayed twice because PostgreSQL counts rows that differ only in upper / lower case as two different rows when in actual fact they are the same.
- Did a row count on all views.
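A sketch of the case-folding fix for the double-counting described above; the names come from the log's schema, and the real view is more involved than this:

```sql
-- Grouping on the upper-cased value collapses 'p' and 'P' into one
-- group, so each logical row is counted once.
SELECT UPPER(pracdoc_type_id) AS pracdoc_type, count(*)
FROM pracdoc
GROUP BY UPPER(pracdoc_type_id);
```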
Friday 26 September
- At Colin’s place.
- Figured out that the reason why the log in form wasn't working was the primary keys being set as text instead of as numbers (as mentioned before, but I dropped it because I found out the view wasn't actually affected by the auto-number field).
- As mentioned before, the reason why it was displayed as a text field is that, for some strange reason, pgAdminII exports auto-numbers as big ints (64-bit ints). So I have to change them to regular ints, which are 32-bit ints.
- Had a look through all the columns and realised that there are only a few tables (19 to be exact) which have sequences.
- So my plan is to create a new column which contains a regular int, copy over all the values from the big int column, drop the big int column, then rename the regular int column back to what the big int column was.
- The problem is that I can't drop the big int column because of referential integrity. I found the following document which I thought might help
- Went through that for a bit and finished the day on that.
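The add / copy / drop / rename plan above, sketched as SQL. The new column name is illustrative, and, as noted, the DROP COLUMN step is what referential integrity blocks:

```sql
-- Illustrative sketch of the planned column swap on the doctor table.
ALTER TABLE doctor ADD COLUMN doctor_id_new int4;
UPDATE doctor SET doctor_id_new = doctor_id;          -- copy values across
ALTER TABLE doctor DROP COLUMN doctor_id;             -- blocked by foreign keys
ALTER TABLE doctor RENAME COLUMN doctor_id_new TO doctor_id;
```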
Tuesday 30 September
- Thought of a wicked idea: why not just export the database to a text file (like how I've been backing up the database), rename all the big ints to regular ints, then re-import the text file back into PostgreSQL! So I tried that; an easy sed statement fixed all the ints, and it re-imported perfectly.
- Opened up an example database with some random links to the database and it seems to work pretty well (i.e. the auto-numbers seem to work).
- Re-linking the database yet again to the front end. Note, there is a refresh table statement in Access but from what Colin’s told me, it’s usually best to re-link the whole database because the refreshes aren’t usually very reliable.
- (Note the date today: there are two databases backed up, one of the old database (with big ints) and the new one with regular ints).
- Comparing the column types within Access between the old MS SQL database and the new PostgreSQL database. (doing this visually in MS Access design view).
- All fields seem to be ok, except for the practice_priority table which listed the dollar_1, dollar_2, and dollar_3 fields as numbers instead of as currency.
- Linked up the essential views (the ones which were listed in the queries) instead of all of them. Makes my life easier since I don’t have to select all the primary keys again for the views. I’m pretty sure I don’t need all the views, if one pops up I’ll just re-link it again.
- Logged in and the second drop down box was missing some entries. This is strange considering that I got this fixed in the last implementation.
- Checked the view and it was alright (correct row count with the MS SQL database)
- Then checked the form and discovered that I had forgotten that in the last implementation I modified the form to account for upper and lower case; specifically, in the qry_setrep_choose_rep query, the criteria for the category_id column displays In ("reps","coyper"), which I had modified to say In ("reps","coyper","REPS","COYPER") to collect the remaining rows. Have to mail Colin about this.
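The dump-and-sed round trip described above might be sketched like this; the file name and sed pattern are guesses, since the exact command wasn't recorded in the log:

```shell
# Hypothetical sketch: emulate one line of a pg_dump text dump and fix
# the big ints with sed; the real pattern used wasn't recorded.
printf 'doctor_id int8 NOT NULL\n' > dump_line.sql
sed 's/int8/int4/g' dump_line.sql
# The full round trip: pg_dump the database to a file, run sed over it,
# re-create the database, and feed the fixed file back in with psql.
```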
Wednesday 31 September
- Going through the frmDoctorBuilkInforScreen_v_doctor_primary_specs_work form today.
- The following inconsistencies arose between the default PostgreSQL form and the MS SQL form:
- Information Tab: DX Number is displayed in PostgreSQL but not in MS SQL (good imho, probably don't have to worry about it).
- Information Tab: Place Of Grad not displayed in either PostgreSQL or MS SQL.
- Speciality Tab: the sub-form only shows one entry in PostgreSQL (even though there are six of them in MS SQL).
- Locations Tab: same problem as above.
- Dr Priority Tab: this form is mainly blank in both the MS SQL and PostgreSQL forms, so I'm not sure if there's a problem. I'll check this out soon.
- Practice Priority Tab: nothing in most forms to compare with. Will try and find one later.
- Started work on the speciality tab. It seems to work, but only displays one row when there are multiple rows which need to be displayed.
- The sub-form is populated via a script instead of via the normal way (i.e., via the form itself). This script calls a query, and puts the results of the query onto the form.
- Created a fake database to emulate this and it works perfectly (I was thinking the scripting could be the problem).
- Noticed back in the main form, that all the script does is call a query, which calls a table. Instead of getting the form to run the script, I got the data straight from the table. This worked!
- I then changed the form to get the data from the query, this worked too.
- The only thing which could be wrong is the script itself, but since the fake database worked, I have no idea why this wouldn't. Have to mail Colin to ask if it's alright to just bypass the script altogether.
- Had a look at the practice_priority table again and noticed that in the PostgreSQL database the field is already described as money (the PostgreSQL equivalent of the currency field in Access). Doing a Google search on it right now.