I think the ROM division has more to do with cost/device density, speed (interleaving), board space and processor interface, plus anything else that favored the design, bearing in mind that "cost" has many factors -- part management/availability, programming, assembly, etc.
The simple answer is that interleaving with smaller parts was faster and cheaper.
By "interleaving," I mean that the II FDHD/IIx ROMs I just burned are byte-aligned, so given the address line layout on the board, the processor can simultaneously read longs (4 bytes) or shorts (2 bytes) vs. 1 byte at a time from the ROMs without being wait-stated by the internal access time of a single part (and again, recognizing the fact that smaller devices were probably much cheaper in quantity).
The Mac II has a lot of motherboard real estate, and it was probably cheaper to use 27C512s than the 1 Mbit ROMs used in the SE (which also has a 256K ROM, but split into 2 parts, with far less motherboard space to work with). And, for a 2-way split, the ROMs also have a word-access mode, I believe. So, a 4-part design favored speed and cost, but it also took more work in programming and inventory management, with more chance for errors or part discrepancies: Apple had to inventory, program and assemble 4 parts, with proportional increases to the BOM (Bill of Materials) and assembly risk.
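For concreteness, the part-size arithmetic adds up the same either way, using the standard EPROM densities (27C512 = 64 KB, 1 Mbit = 128 KB):

    /* Both layouts total the same 256 KB of ROM. */
    #include <assert.h>

    #define KB(n) ((n) * 1024u)

    static_assert(4 * KB(64)  == KB(256), "Mac II: 4 x 27C512, 64 KB each");
    static_assert(2 * KB(128) == KB(256), "Mac SE: 2 x 1 Mbit parts, 128 KB each");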
But, there is a broader answer that applies more generally to Moore's Law and the pace of technological innovation:
By the time the IIx arrived 1.5 years later, Apple had moved to a much more flexible ROM SIMM that significantly simplified manufacturing and assembly, added upgradability, reduced board space, and had other advantages. However, it still used 4 chips to keep the byte-interleave speed -- just in a more compact form factor.
ROM SIMMs were a modular design innovation that would propagate forward, along with other kinds of plug-in upgrades -- especially processor and video memory upgrades -- to support Apple's rapidly expanding product/developer ecosystem. Apple was on a 1.5-year release heartbeat for this format of box, and the IIfx was the next in the sequence. With a ROM SIMM, Apple could design one small board, program and assemble it once, and then manage 1 part at final assembly, burn-in and test instead of 4. So, they knocked 3 parts off the main build BOM and made the ROM SIMM a sub-assembly -- easier to manage and less risk at final build.
When you think about the fact that Moore's Law-driven processor advancement is doubling the number of transistors about every 1.5 to 2 years, it is not surprising to see major leaps forward in consumer designs following the same cycle.
The flat-box machines were: II -> IIx -> IIfx. And remember that after the IIx came the mid-size baby steps of the IIcx and IIci (faster heartbeats), and then, after the IIfx, the IIsi as a stylish new consumer model (replacing the mid-size box) -- all keeping pace with Moore's Law.
So, as above, the main technical advantage to splitting any ROM into 4 parts is faster, byte-interleaved ROM access (depending on the ROM design and interface -- much has changed since), and when making the leap from the 2-ROM (word-access), space-constrained, 8 MHz 68000 to the 16 MHz 68020, byte-access interleaving across 4 parts offered clear advantages. But cost, speed, board real estate, design complexity, modularity and device density are all factors tied to Moore's Law-driven innovation. Consider that the IIfx arrived 1.5 years after the IIx and the ROM doubled to 512K, then the Quadra 900 came another 1.5 years later and the ROM doubled again to 1 MB -- memory devices were necessarily improving at the same pace as processor technology.
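To put a rough number on the bus-width part of that leap, here is a toy comparison of accesses needed per 4-byte long over a 2-ROM (16-bit) path versus a 4-ROM (32-bit) path; the counts are illustrative, not measured 68000/68020 bus timings:

    /* Illustrative only: accesses needed to fetch one 4-byte long word.
       Real bus timing has more variables (wait states, cycle lengths, etc.). */
    #include <stdio.h>

    int main(void)
    {
        const unsigned long_bytes = 4;
        const unsigned bus_16 = 2;   /* bytes per access on a 2-ROM, word-wide path */
        const unsigned bus_32 = 4;   /* bytes per access on a 4-ROM, long-wide path */

        printf("16-bit path: %u accesses per long\n", long_bytes / bus_16); /* 2 */
        printf("32-bit path: %u accesses per long\n", long_bytes / bus_32); /* 1 */
        return 0;
    }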
A question about why Apple decided on 4 ROMs seems simple. But it is also a small lens into, or clue about, the simultaneous leap forward across all facets of technology -- hardware, firmware and software development -- not to mention the related human engagement and the meteoric rise of personal computing.