
Performance Review: Record Creation and Gathering Data

Many FileMaker developers can share horror stories of performance bottlenecks, slow-running scripts, and hours spent waiting on queries. That very same frustration led me to ask the age-old question: what is the fastest way I can get something done? As with all things FileMaker, the answer depends on context. Are you trying to create ten records, or ten million? Are you trying to gather a list of primary keys for a found set of 500,000 records, or 5? In many cases the answer isn't clear, and sometimes even the most experienced developer will have nothing to say except "that depends."

To answer some of these burning questions, I’ve spent some time creating a small solution designed to test a few of my assumptions. I’ve included the file here for anyone to download, modify, and re-test. The file includes the demo data that you will see further down in this post. Before diving into my findings, I’ll explain exactly what I’m testing and how I’ve gone about testing it.

The solution was built to test two questions:

What is the fastest way to create new records?
What is the fastest way to collect data from every record in a found set?

To answer the first question, I identified three methods of creating new records that most developers should be familiar with. The first was a simple scripted loop using the "New Record/Request" script step. The second was exporting a set of records, re-importing them as new records, and then looping through those records to set any fields that needed to change. The last was creating records through a relationship with the "Allow creation of records" option checked, using a portal on an otherwise empty layout. I had also intended to test the "Duplicate Record" script step, but I found that it caused the file size of my otherwise small solution to balloon out of control! An interesting finding, and one that excluded that particular method from my tests. (For those interested, it was also the slowest of the methods in the small-batch tests I did perform with it.)
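The same row-by-row vs. batch trade-off shows up in most data stores, not just FileMaker. Here is a minimal sketch of that comparison using Python and sqlite3 as a stand-in (the table and column names are invented for illustration; this is not the FileMaker test file itself):

```python
# Rough analogue of the record-creation comparison, using sqlite3.
# "Loop" mirrors a New Record/Request loop; "bulk" mirrors a batch
# approach like export/re-import. Names here are illustrative only.
import sqlite3
import time

def time_row_by_row(conn, n):
    """One INSERT statement per record, like a scripted loop."""
    t0 = time.perf_counter()
    cur = conn.cursor()
    for i in range(n):
        cur.execute("INSERT INTO records (value) VALUES (?)", (i,))
    conn.commit()
    return time.perf_counter() - t0

def time_bulk(conn, n):
    """All records handed to the engine in one executemany call."""
    t0 = time.perf_counter()
    conn.executemany("INSERT INTO records (value) VALUES (?)",
                     ((i,) for i in range(n)))
    conn.commit()
    return time.perf_counter() - t0

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, value INTEGER)")

for n in (100, 10_000):
    print(f"n={n}: loop={time_row_by_row(conn, n):.4f}s  "
          f"bulk={time_bulk(conn, n):.4f}s")
```

As in the FileMaker results below, the absolute difference at small batch sizes is usually negligible; the gap only matters once the batches get large.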

To answer the second question, I tested a scripted loop that walked over each record using "Go to Record/Request", a recursive custom function built on the "GetNthRecord()" function, and the "ExecuteSQL()" function. All three methods produced a $variable containing a return-delimited list of values, one per record in the found set.
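The three gathering strategies (iterate, recurse, and query) can be sketched outside FileMaker as well. A hedged Python/sqlite3 analogue, with invented table and function names, all producing the same return-delimited list:

```python
# Three ways to build a return-delimited list of key values, loosely
# mirroring the FileMaker methods: a record loop, a recursive custom
# function, and a single SQL call. sqlite3 is a stand-in for FileMaker.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO people (id) VALUES (?)",
                 ((i,) for i in range(1, 501)))

def gather_loop(conn):
    # Analogue of "Go to Record/Request; Next": visit rows one at a time.
    out = []
    for (pk,) in conn.execute("SELECT id FROM people ORDER BY id"):
        out.append(str(pk))
    return "\n".join(out)

def gather_recursive(rows, i=0):
    # Analogue of a recursive custom function using GetNthRecord().
    # Note: recursion depth grows with the size of the found set.
    if i >= len(rows):
        return ""
    rest = gather_recursive(rows, i + 1)
    return str(rows[i][0]) + ("\n" + rest if rest else "")

def gather_sql(conn):
    # Analogue of ExecuteSQL(): let the engine concatenate in one call.
    (joined,) = conn.execute(
        "SELECT group_concat(id, char(10)) "
        "FROM (SELECT id FROM people ORDER BY id)").fetchone()
    return joined

rows = conn.execute("SELECT id FROM people ORDER BY id").fetchall()
print(len(gather_loop(conn).split("\n")), "values gathered per method")
```

All three produce the same list; the interesting question, as below, is how their cost scales with the size of the found set.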

Some additional considerations:
All tests were performed locally, on an unhosted file.
All tests were performed on a MacBook Air using OS X Yosemite v10.10.2 with a 1.3GHz Intel Core i5 processor, 4GB 1600 MHz DDR3 RAM. Your mileage may vary.

Now, for the results!

Question 1: What is the fastest way to create new records?

The short answer:
Using a portal and allowing creation of records via a relationship.

The long answer:
The performance difference between the three methods was not significant over the tests I performed, and none of the three showed a slowdown as batch size grew. The performance benefit of using a portal vs. the much simpler "New Record/Request" loop is probably not noticeable unless you are creating batches of records in the millions.

Note that all charts are on a logarithmic scale.




Chart one indicates the total record creation time for a specific batch size. The data point at a record count of 100 indicates that it took x number of seconds to create 100 records. As you can see, all three methods follow a nearly identical slope.


Chart two indicates the total time to create a single record for a given batch size. The data point at a record count of 100 indicates that it took x number of microseconds to create a single record in a batch of 100 records.
Question 2: What is the fastest way to collect data from every record in a found set?

The short answer:
ExecuteSQL, by a mile!

The long answer:
I went into this test assuming that ExecuteSQL would beat the other two options, but I did not expect the margin I saw. For a found set of only 100 records, ExecuteSQL is already nearly 10x faster than looping over records with "Go to Record/Request". Surprisingly, it only gets better: at a found set of ~95,000 records, ExecuteSQL is more than 100x faster than looping over each record, which means its cost grows with a slope more than 10x shallower than either other method's. For found sets over 10,000 records I would always recommend ExecuteSQL.
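You can measure this kind of speedup ratio yourself with a simple timing harness. A minimal sketch, again using Python/sqlite3 as a stand-in rather than FileMaker, and with an invented table name:

```python
# Timing sketch: loop-and-concatenate vs. a single SQL call over a
# large found set, reporting the speedup ratio. sqlite3 stands in for
# FileMaker here; absolute numbers will differ from the charts above.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO t (id) VALUES (?)",
                 ((i,) for i in range(100_000)))

def timed(fn):
    t0 = time.perf_counter()
    fn()
    return time.perf_counter() - t0

loop_s = timed(lambda: "\n".join(
    str(pk) for (pk,) in conn.execute("SELECT id FROM t ORDER BY id")))
sql_s = timed(lambda: conn.execute(
    "SELECT group_concat(id, char(10)) "
    "FROM (SELECT id FROM t ORDER BY id)").fetchone()[0])

print(f"loop: {loop_s:.4f}s  sql: {sql_s:.4f}s  "
      f"speedup: {loop_s / sql_s:.1f}x")
```

The exact ratio depends on the engine, the hardware, and the found-set size, which is why per-record charts on a logarithmic scale are the clearest way to compare the methods.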
Note that all charts are on a logarithmic scale.

[Charts: total gather time and per-record gather time for the three methods]


I hope this information can be of use to some of you out there. If you have any questions or concerns about my methods, or any ideas for additional methods to test, feel free to email me at jnardozza@docuwrx.com.