
I have a new system with 4x 512 GB Samsung SSD 850 drives in a RAID 10 array on an HP Smart Array P840 controller with 4 GB FBWC. When I download a file from the network (10 Gbit/s), the write speed is not that good. I tried downloading a single 1000 MB file using wget:

Saving to: ‘1000mb.bin’

1000mb.bin                                       100%[=========================================================================================================>]   1000M   383MB/s   in 2.6s

Write speed: 383 MB/s

EDIT: When downloading the same file to /dev/null I get the full 10 Gbit/s. The use case is downloading and storing files of this size.
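
For reference, this is roughly how I compared the two sinks (the URL and the target path are placeholders for our actual server and mount point):

# Network only: sink the download into /dev/null; this saturates the 10 Gbit/s link
wget -O /dev/null http://fileserver/1000mb.bin

# Network + storage: write to the RAID 10 volume; this tops out around 383 MB/s
wget -O /data/1000mb.bin http://fileserver/1000mb.bin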

Also, when I write a file with dd using a 512-byte block size, the speed is about the same:

dd if=/dev/zero of=bench.bin bs=512 count=10000K
10240000+0 records in
10240000+0 records out
5242880000 bytes (5.2 GB) copied, 14.6632 s, 358 MB/s

However, a 4k block size gives better performance:

dd if=/dev/zero of=bench.bin bs=4k count=1000K
1024000+0 records in
1024000+0 records out
4194304000 bytes (4.2 GB) copied, 3.02447 s, 1.4 GB/s
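
(Note that both dd runs go through the page cache, so part of what they measure is RAM. A variant like the following, with a large block size and O_DIRECT, should reflect the controller/SSD path more closely; the bs/count values are just examples, not something I have tuned:)

# Bypass the page cache entirely
dd if=/dev/zero of=bench.bin bs=1M count=4K oflag=direct

# Or keep buffered writes but include the final flush in the timing
dd if=/dev/zero of=bench.bin bs=1M count=4K conv=fdatasync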

So I tried various settings on the RAID controller (cache ratio, SSD Smart Path, etc.), but I didn't see much difference. Any ideas how to increase the write speed?
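
For reference, these are the kinds of ssacli commands I used to toggle those settings (the controller is in slot 1, as in the output below; exact option names may differ between ssacli/hpssacli versions):

# Controller cache read/write ratio
ssacli ctrl slot=1 modify cacheratio=10/90

# HP SSD Smart Path on the array
ssacli ctrl slot=1 array A modify ssdsmartpath=disable

# Write cache on the physical drives themselves
ssacli ctrl slot=1 modify drivewritecache=disable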

Current settings for the controller:

Smart Array P840 in Slot 1
   Bus Interface: PCI
   Slot: 1
   Serial Number: 
   Cache Serial Number: 
   RAID 6 (ADG) Status: Enabled
   Controller Status: OK
   Hardware Revision: B
   Firmware Version: 4.52
   Rebuild Priority: High
   Expand Priority: Medium
   Surface Scan Delay: 3 secs
   Surface Scan Mode: Idle
   Parallel Surface Scan Supported: Yes
   Current Parallel Surface Scan Count: 1
   Max Parallel Surface Scan Count: 16
   Queue Depth: Automatic
   Monitor and Performance Delay: 60  min
   Elevator Sort: Enabled
   Degraded Performance Optimization: Disabled
   Inconsistency Repair Policy: Disabled
   Wait for Cache Room: Disabled
   Surface Analysis Inconsistency Notification: Disabled
   Post Prompt Timeout: 15 secs
   Cache Board Present: True
   Cache Status: OK
   Cache Ratio: 10% Read / 90% Write
   Drive Write Cache: Disabled
   Total Cache Size: 4.0 GB
   Total Cache Memory Available: 3.8 GB
   No-Battery Write Cache: Disabled
   SSD Caching RAID5 WriteBack Enabled: True
   SSD Caching Version: 2
   Cache Backup Power Source: Batteries
   Battery/Capacitor Count: 1
   Battery/Capacitor Status: OK
   SATA NCQ Supported: True
   Spare Activation Mode: Activate on physical drive failure (default)
   Controller Temperature (C): 44
   Cache Module Temperature (C): 37
   Number of Ports: 2 Internal only
   Encryption: Disabled
   Express Local Encryption: False
   Driver Name: hpsa
   Driver Version: 3.4.4
   Driver Supports HP SSD Smart Path: True
   PCI Address (Domain:Bus:Device.Function): 0000:06:00.0
   Negotiated PCIe Data Rate: PCIe 3.0 x8 (7880 MB/s)
   Controller Mode: RAID
   Controller Mode Reboot: Not Required
   Latency Scheduler Setting: Disabled
   Current Power Mode: MaxPerformance
   Host Serial Number: 
   Sanitize Erase Supported: False
   Primary Boot Volume: logicaldrive 1 
   Secondary Boot Volume: logicaldrive 2

Physical Drives
      physicaldrive 2I:1:1 (port 2I:box 1:bay 1, Solid State SATA, 512.1 GB, OK)
      physicaldrive 2I:1:2 (port 2I:box 1:bay 2, Solid State SATA, 512.1 GB, OK)
      physicaldrive 2I:1:3 (port 2I:box 1:bay 3, Solid State SATA, 512.1 GB, OK)
      physicaldrive 2I:1:4 (port 2I:box 1:bay 4, Solid State SATA, 512.1 GB, OK)
      None attached

   Array: A
      Interface Type: Solid State SATA
      Unused Space: 0  MB (0.0%)
      Used Space: 1.9 TB (100.0%)
      Status: OK
      MultiDomain Status: OK
      Array Type: Data
      HP SSD Smart Path: disable



      Logical Drive: 1
         Size: 953.8 GB
         Fault Tolerance: 1+0
         Heads: 255
         Sectors Per Track: 32
         Cylinders: 65535
         Strip Size: 256 KB
         Full Stripe Size: 512 KB
         Status: OK
         MultiDomain Status: OK
         Caching:  Enabled
         Unique Identifier: 
         Disk Name: /dev/sda
         Mount Points: /boot 487 MB Partition Number 2, / 14.0 GB Partition Number 7
         OS Status: LOCKED
         Logical Drive Label: 
         Mirror Group 1:
            physicaldrive 2I:1:1 (port 2I:box 1:bay 1, Solid State SATA, 512.1 GB, OK)
            physicaldrive 2I:1:2 (port 2I:box 1:bay 2, Solid State SATA, 512.1 GB, OK)
         Mirror Group 2:
            physicaldrive 2I:1:3 (port 2I:box 1:bay 3, Solid State SATA, 512.1 GB, OK)
            physicaldrive 2I:1:4 (port 2I:box 1:bay 4, Solid State SATA, 512.1 GB, OK)
         Drive Type: Data
         LD Acceleration Method: Controller Cache

Any help appreciated :)


UPDATE: We achieved the performance increase by switching from ext4 to XFS. Thanks to everyone who answered.
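
For anyone else hitting this, a stripe-aligned XFS filesystem can be created roughly like this (a sketch only; the device name and mount point are examples, and su/sw are chosen to match the 256 KB strip / 512 KB full stripe reported above):

# XFS aligned to the RAID 10 geometry: 256 KB strip, 2 data drives
mkfs.xfs -f -d su=256k,sw=2 /dev/sdb1
mount -o noatime /dev/sdb1 /data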

  • Why is this unexpected? Using just 512 byte blocks naturally will reduce the performance and downloading a file from the network is not a reliable performance indicator for disk write speed either, as you don't see/know what the bottlenecks are. And in any case, your file sizes are too small anyway for reliable performance measurement, as caches can interfere at that size.
    – Sven
    Apr 18, 2017 at 12:12
  • Hi Sven, thanks for your answer. Our use case is downloading and storing files of that size via network. When I download the same file to /dev/null, I get full 10Gbit/s speed (I'm going to write that into my question too). That tells me the bottleneck is the disk storage. My question is what can I do to improve storage speed for that specific use case?
    – Laord
    Apr 18, 2017 at 13:07
  • Not tempted to use supported disks?
    – Chopper3
    Apr 18, 2017 at 13:11
  • Well, if we find out that it's not a configuration problem but the disks themselves, we might change to other disks.
    – Laord
    Apr 18, 2017 at 13:24

1 Answer


I would recommend increasing the tagged queue depth for the disks to 256+. Also keep in mind that writing to /dev/null requires no interrupts and no physical I/O, while writes through the Smart Array do. In your example, 380 MB/s at a 512-byte block size works out to 778,240 IOPS, and a P840 delivers roughly 1M IOPS only in a benchmark configuration (24x SSD in RAID 0). In short: do not expect high throughput with small blocks. Change the tool or its settings so that blocks of 128 KB or larger are used, and a single P840 can reach up to about 5 GB/s.
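
In practice that translates to something like the following (sda and the values are examples; the maximum queue depth the hpsa driver and controller firmware accept may be lower, and the download tool itself must also write in large chunks):

# Raise the SCSI queue depth for the logical drive
echo 256 > /sys/block/sda/device/queue_depth

# Allow more requests to queue in the block layer
echo 1024 > /sys/block/sda/queue/nr_requests

# Re-test with large sequential writes that bypass the page cache
dd if=/dev/zero of=bench.bin bs=1M count=4K oflag=direct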
