
Issue 4656 - Remove problematic language from UI/CLI/lib389

Description: Replace "master" and "slave" with more appropriate names

relates: https://github.com/389ds/389-ds-base/issues/4656

Reviewed by: firstyear (Thanks!)
Mark Reynolds, 4 years ago
commit 19eb28db4d
100 changed files with 3334 additions and 3334 deletions
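In lib389-based tests the rename is mechanical: topology dictionary keys change from "masterN" to "supplierN", and ReplicaRole.MASTER becomes ReplicaRole.SUPPLIER. A minimal sketch of the new spelling, modeled on the test diffs below (topology_m2 is the standard two-supplier pytest fixture from lib389/topologies.py; the test name here is illustrative):

    from lib389.topologies import topology_m2  # two-supplier pytest fixture
    from lib389.replica import ReplicationManager
    from lib389._constants import DEFAULT_SUFFIX

    def test_naming_sketch(topology_m2):
        # Topology keys now read "supplierN"; before this change: "masterN"
        s1 = topology_m2.ms["supplier1"]
        s2 = topology_m2.ms["supplier2"]
        # wait_for_replication keeps its signature; only the role names change
        ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(s1, s2)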
  1. dirsrvtests/create_test.py (+25 -25)
  2. dirsrvtests/tests/data/openldap_2_389/4539/slapd.d/cn=config/cn=schema.ldif (+1 -1)
  3. dirsrvtests/tests/longduration/automembers_long_test.py (+196 -196)
  4. dirsrvtests/tests/stress/README (+1 -1)
  5. dirsrvtests/tests/stress/reliabilty/reliab_7_5_test.py (+132 -132)
  6. dirsrvtests/tests/stress/replication/mmr_01_4m-2h-4c_test.py (+208 -208)
  7. dirsrvtests/tests/stress/replication/mmr_01_4m_test.py (+176 -176)
  8. dirsrvtests/tests/suites/acl/acl_test.py (+159 -159)
  9. dirsrvtests/tests/suites/automember_plugin/basic_test.py (+101 -101)
  10. dirsrvtests/tests/suites/basic/basic_test.py (+1 -1)
  11. dirsrvtests/tests/suites/clu/repl_monitor_test.py (+5 -5)
  12. dirsrvtests/tests/suites/config/config_test.py (+28 -28)
  13. dirsrvtests/tests/suites/config/regression_test.py (+1 -1)
  14. dirsrvtests/tests/suites/ds_tools/replcheck_test.py (+52 -52)
  15. dirsrvtests/tests/suites/dynamic_plugins/dynamic_plugins_test.py (+12 -12)
  16. dirsrvtests/tests/suites/entryuuid/replicated_test.py (+2 -2)
  17. dirsrvtests/tests/suites/fourwaymmr/fourwaymmr_test.py (+115 -115)
  18. dirsrvtests/tests/suites/fractional/fractional_test.py (+49 -49)
  19. dirsrvtests/tests/suites/gssapi_repl/gssapi_repl_test.py (+33 -33)
  20. dirsrvtests/tests/suites/healthcheck/health_repl_test.py (+12 -12)
  21. dirsrvtests/tests/suites/healthcheck/health_sync_test.py (+6 -6)
  22. dirsrvtests/tests/suites/healthcheck/healthcheck_test.py (+10 -10)
  23. dirsrvtests/tests/suites/lib389/idm/user_compare_m2Repl_test.py (+4 -4)
  24. dirsrvtests/tests/suites/mapping_tree/be_del_and_default_naming_attr_test.py (+2 -2)
  25. dirsrvtests/tests/suites/mapping_tree/referral_during_tot_init_test.py (+14 -14)
  26. dirsrvtests/tests/suites/memberof_plugin/regression_test.py (+7 -7)
  27. dirsrvtests/tests/suites/memory_leaks/MMR_double_free_test.py (+31 -31)
  28. dirsrvtests/tests/suites/password/regression_test.py (+5 -5)
  29. dirsrvtests/tests/suites/plugins/entryusn_test.py (+1 -1)
  30. dirsrvtests/tests/suites/referint_plugin/rename_test.py (+4 -4)
  31. dirsrvtests/tests/suites/replication/__init__.py (+1 -1)
  32. dirsrvtests/tests/suites/replication/acceptance_test.py (+84 -84)
  33. dirsrvtests/tests/suites/replication/cascading_test.py (+9 -9)
  34. dirsrvtests/tests/suites/replication/changelog_encryption_test.py (+2 -2)
  35. dirsrvtests/tests/suites/replication/changelog_test.py (+66 -66)
  36. dirsrvtests/tests/suites/replication/changelog_trimming_test.py (+30 -30)
  37. dirsrvtests/tests/suites/replication/cleanallruv_max_tasks_test.py (+9 -9)
  38. dirsrvtests/tests/suites/replication/cleanallruv_test.py (+235 -235)
  39. dirsrvtests/tests/suites/replication/conflict_resolve_test.py (+56 -56)
  40. dirsrvtests/tests/suites/replication/conftest.py (+4 -4)
  41. dirsrvtests/tests/suites/replication/encryption_cl5_test.py (+9 -9)
  42. dirsrvtests/tests/suites/replication/multiple_changelogs_test.py (+6 -6)
  43. dirsrvtests/tests/suites/replication/regression_i2_test.py (+8 -8)
  44. dirsrvtests/tests/suites/replication/regression_m2_test.py (+88 -88)
  45. dirsrvtests/tests/suites/replication/regression_m2c2_test.py (+35 -35)
  46. dirsrvtests/tests/suites/replication/regression_m3_test.py (+9 -9)
  47. dirsrvtests/tests/suites/replication/repl_agmt_bootstrap_test.py (+17 -17)
  48. dirsrvtests/tests/suites/replication/ruvstore_test.py (+13 -13)
  49. dirsrvtests/tests/suites/replication/series_of_repl_bugs_test.py (+25 -25)
  50. dirsrvtests/tests/suites/replication/single_master_test.py (+14 -14)
  51. dirsrvtests/tests/suites/replication/tls_client_auth_repl_test.py (+10 -10)
  52. dirsrvtests/tests/suites/replication/tombstone_fixup_test.py (+3 -3)
  53. dirsrvtests/tests/suites/replication/tombstone_test.py (+1 -1)
  54. dirsrvtests/tests/suites/replication/wait_for_async_feature_test.py (+28 -28)
  55. dirsrvtests/tests/suites/rewriters/adfilter_test.py (+1 -1)
  56. dirsrvtests/tests/suites/sasl/regression_test.py (+25 -25)
  57. dirsrvtests/tests/suites/schema/schema_replication_test.py (+115 -115)
  58. dirsrvtests/tests/suites/state/mmt_state_test.py (+16 -16)
  59. dirsrvtests/tests/suites/syncrepl_plugin/basic_test.py (+2 -2)
  60. dirsrvtests/tests/suites/vlv/regression_test.py (+11 -11)
  61. dirsrvtests/tests/tickets/ticket47573_test.py (+18 -18)
  62. dirsrvtests/tests/tickets/ticket47619_test.py (+14 -14)
  63. dirsrvtests/tests/tickets/ticket47653MMR_test.py (+73 -73)
  64. dirsrvtests/tests/tickets/ticket47676_test.py (+54 -54)
  65. dirsrvtests/tests/tickets/ticket47721_test.py (+73 -73)
  66. dirsrvtests/tests/tickets/ticket47781_test.py (+14 -14)
  67. dirsrvtests/tests/tickets/ticket47787_test.py (+62 -62)
  68. dirsrvtests/tests/tickets/ticket47869MMR_test.py (+63 -63)
  69. dirsrvtests/tests/tickets/ticket47871_test.py (+16 -16)
  70. dirsrvtests/tests/tickets/ticket47988_test.py (+99 -99)
  71. dirsrvtests/tests/tickets/ticket48266_test.py (+43 -43)
  72. dirsrvtests/tests/tickets/ticket48325_test.py (+12 -12)
  73. dirsrvtests/tests/tickets/ticket48342_test.py (+21 -21)
  74. dirsrvtests/tests/tickets/ticket48362_test.py (+22 -22)
  75. dirsrvtests/tests/tickets/ticket48759_test.py (+2 -2)
  76. dirsrvtests/tests/tickets/ticket48784_test.py (+31 -31)
  77. dirsrvtests/tests/tickets/ticket48799_test.py (+9 -9)
  78. dirsrvtests/tests/tickets/ticket48916_test.py (+10 -10)
  79. dirsrvtests/tests/tickets/ticket48944_test.py (+39 -39)
  80. dirsrvtests/tests/tickets/ticket49008_test.py (+4 -4)
  81. dirsrvtests/tests/tickets/ticket49020_test.py (+4 -4)
  82. dirsrvtests/tests/tickets/ticket49073_test.py (+26 -26)
  83. dirsrvtests/tests/tickets/ticket49121_test.py (+13 -13)
  84. dirsrvtests/tests/tickets/ticket49180_test.py (+36 -36)
  85. dirsrvtests/tests/tickets/ticket49287_test.py (+6 -6)
  86. dirsrvtests/tests/tickets/ticket49386_test.py (+1 -1)
  87. dirsrvtests/tests/tickets/ticket49412_test.py (+2 -2)
  88. dirsrvtests/tests/tickets/ticket49460_test.py (+4 -4)
  89. dirsrvtests/tests/tickets/ticket49463_test.py (+4 -4)
  90. dirsrvtests/tests/tickets/ticket49471_test.py (+1 -1)
  91. dirsrvtests/tests/tickets/ticket49540_test.py (+1 -1)
  92. dirsrvtests/tests/tickets/ticket49623_2_test.py (+2 -2)
  93. dirsrvtests/tests/tickets/ticket49658_test.py (+173 -173)
  94. dirsrvtests/tests/tickets/ticket50078_test.py (+1 -1)
  95. dirsrvtests/tests/tickets/ticket50232_test.py (+1 -1)
  96. ldap/servers/slapd/tools/ldclt/ldapfct.c (+5 -5)
  97. ldap/servers/slapd/tools/ldclt/ldclt.c (+30 -30)
  98. ldap/servers/slapd/tools/ldclt/ldclt.h (+9 -9)
  99. ldap/servers/slapd/tools/ldclt/ldclt.man (+8 -8)
  100. ldap/servers/slapd/tools/ldclt/ldclt.use (+3 -3)

+ 25 - 25
dirsrvtests/create_test.py

@@ -29,14 +29,14 @@ def displayUsage():
     print ('\nUsage:\ncreate_ticket.py -t|--ticket <ticket number> ' +
            '-s|--suite <suite name> ' +
            '[ i|--instances <number of standalone instances> ' +
-           '[ -m|--masters <number of masters> -h|--hubs <number of hubs> ' +
+           '[ -m|--suppliers <number of suppliers> -h|--hubs <number of hubs> ' +
            '-c|--consumers <number of consumers> ] -o|--outputfile ]\n')
     print ('If only "-t" is provided then a single standalone instance is ' +
            'created. Or you can create a test suite script using ' +
            '"-s|--suite" instead of using "-t|--ticket". The "-i" option ' +
            'can add mulitple standalone instances (maximum 99). However, you' +
            ' can not mix "-i" with the replication options (-m, -h , -c).  ' +
-           'There is a maximum of 99 masters, 99 hubs, and 99 consumers.')
+           'There is a maximum of 99 suppliers, 99 hubs, and 99 consumers.')
     print('If "-s|--suite" option was chosen, then no topology would be added ' +
           'to the test script. You can find predefined fixtures in the lib389/topologies.py ' +
           'and use them or write a new one if you have a special case.')
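With the renamed options, a hypothetical invocation that requests a replicated topology (option names taken from the usage text above; the ticket number and output filename are illustrative) would be:

    python create_test.py -t 4656 -m 2 -h 1 -c 2 -o ticket4656_test.py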
@@ -59,7 +59,7 @@ def writeFinalizer():
     TEST.write('\n\n')
 
 
-def get_existing_topologies(inst, masters, hubs, consumers):
+def get_existing_topologies(inst, suppliers, hubs, consumers):
     """Check if the requested topology exists"""
     setup_text = ""
 
@@ -72,14 +72,14 @@ def get_existing_topologies(inst, masters, hubs, consumers):
             setup_text = "{} Standalone Instances".format(inst)
     else:
         i = ''
-    if masters:
-        ms = 'm{}'.format(masters)
+    if suppliers:
+        ms = 'm{}'.format(suppliers)
         if len(setup_text) > 0:
             setup_text += ", "
-        if masters == 1:
-            setup_text += "Master Instance"
+        if suppliers == 1:
+            setup_text += "Supplier Instance"
         else:
-            setup_text += "{} Master Instances".format(masters)
+            setup_text += "{} Supplier Instances".format(suppliers)
     else:
         ms = ''
     if hubs:
@@ -141,7 +141,7 @@ if len(sys.argv) > 0:
     parser.add_option('-t', '--ticket', dest='ticket', default=None)
     parser.add_option('-s', '--suite', dest='suite', default=None)
     parser.add_option('-i', '--instances', dest='inst', default='0')
-    parser.add_option('-m', '--masters', dest='masters', default='0')
+    parser.add_option('-m', '--suppliers', dest='suppliers', default='0')
     parser.add_option('-h', '--hubs', dest='hubs', default='0')
     parser.add_option('-c', '--consumers', dest='consumers', default='0')
     parser.add_option('-o', '--outputfile', dest='filename', default=None)
@@ -161,16 +161,16 @@ if len(sys.argv) > 0:
               'but not both.')
         displayUsage()
 
-    if int(args.masters) == 0:
+    if int(args.suppliers) == 0:
         if int(args.hubs) > 0 or int(args.consumers) > 0:
-            print('You must use "-m|--masters" if you want to have hubs ' +
+            print('You must use "-m|--suppliers" if you want to have hubs ' +
                   'and/or consumers')
             displayUsage()
 
-    if not args.masters.isdigit() or \
-           int(args.masters) > 99 or \
-           int(args.masters) < 0:
-        print('Invalid value for "--masters", it must be a number and it can' +
+    if not args.suppliers.isdigit() or \
+           int(args.suppliers) > 99 or \
+           int(args.suppliers) < 0:
+        print('Invalid value for "--suppliers", it must be a number and it can' +
               ' not be greater than 99')
         displayUsage()
 
@@ -194,7 +194,7 @@ if len(sys.argv) > 0:
                   'greater than 0 and not greater than 99')
             displayUsage()
         if int(args.inst) > 0:
-            if int(args.masters) > 0 or \
+            if int(args.suppliers) > 0 or \
                             int(args.hubs) > 0 or \
                             int(args.consumers) > 0:
                 print('You can not mix "--instances" with replication.')
@@ -204,16 +204,16 @@ if len(sys.argv) > 0:
     ticket = args.ticket
     suite = args.suite
 
-    if args.inst == '0' and args.masters == '0' and args.hubs == '0' \
+    if args.inst == '0' and args.suppliers == '0' and args.hubs == '0' \
        and args.consumers == '0':
         instances = 1
         my_topology = [True, 'topology_st', "Standalone Instance"]
     else:
         instances = int(args.inst)
-        masters = int(args.masters)
+        suppliers = int(args.suppliers)
         hubs = int(args.hubs)
         consumers = int(args.consumers)
-        my_topology = get_existing_topologies(instances, masters, hubs, consumers)
+        my_topology = get_existing_topologies(instances, suppliers, hubs, consumers)
     filename = args.filename
     setup_text = my_topology[2]
 
@@ -245,8 +245,8 @@ if len(sys.argv) > 0:
     if not my_topology[0]:
         # Write the replication or standalone classes
         topologies_str = ""
-        if masters > 0:
-            topologies_str += " {} masters".format(masters)
+        if suppliers > 0:
+            topologies_str += " {} suppliers".format(suppliers)
         if hubs > 0:
             topologies_str += " {} hubs".format(hubs)
         if consumers > 0:
@@ -259,8 +259,8 @@ if len(sys.argv) > 0:
         TEST.write('def topo(request):\n')
         TEST.write('    """Create a topology with{}"""\n\n'.format(topologies_str))
         TEST.write('    topology = create_topology({\n')
-        if masters > 0:
-            TEST.write('        ReplicaRole.MASTER: {},\n'.format(masters))
+        if suppliers > 0:
+            TEST.write('        ReplicaRole.SUPPLIER: {},\n'.format(suppliers))
         if hubs > 0:
             TEST.write('        ReplicaRole.HUB: {},\n'.format(hubs))
         if consumers > 0:
@@ -270,7 +270,7 @@ if len(sys.argv) > 0:
         TEST.write('        })\n')
 
         TEST.write('    # You can write replica test here. Just uncomment the block and choose instances\n')
-        TEST.write('    # replicas = Replicas(topology.ms["master1"])\n')
+        TEST.write('    # replicas = Replicas(topology.ms["supplier1"])\n')
         TEST.write('    # replicas.test(DEFAULT_SUFFIX, topology.cs["consumer1"])\n')
 
         writeFinalizer()
@@ -298,7 +298,7 @@ if len(sys.argv) > 0:
     TEST.write('    # please, write additional fixture for that (including finalizer).\n'
                '    # Topology for suites are predefined in lib389/topologies.py.\n\n')
     TEST.write('    # If you need host, port or any other data about instance,\n')
-    TEST.write('    # Please, use the instance object attributes for that (for example, topo.ms["master1"].serverid)\n\n\n')
+    TEST.write('    # Please, use the instance object attributes for that (for example, topo.ms["supplier1"].serverid)\n\n\n')
 
     # Write the main function
     TEST.write("if __name__ == '__main__':\n")

+ 1 - 1
dirsrvtests/tests/data/openldap_2_389/4539/slapd.d/cn=config/cn=schema.ldif

@@ -66,7 +66,7 @@ olcLdapSyntaxes: ( 1.3.6.1.4.1.1466.115.121.1.26 DESC 'IA5 String' )
 olcLdapSyntaxes: ( 1.3.6.1.4.1.1466.115.121.1.27 DESC 'Integer' )
 olcLdapSyntaxes: ( 1.3.6.1.4.1.1466.115.121.1.28 DESC 'JPEG' X-NOT-HUMAN-REA
  DABLE 'TRUE' )
-olcLdapSyntaxes: ( 1.3.6.1.4.1.1466.115.121.1.29 DESC 'Master And Shadow Acc
+olcLdapSyntaxes: ( 1.3.6.1.4.1.1466.115.121.1.29 DESC 'Supplier And Shadow Acc
  ess Points' )
 olcLdapSyntaxes: ( 1.3.6.1.4.1.1466.115.121.1.30 DESC 'Matching Rule Descrip
  tion' )

+ 196 - 196
dirsrvtests/tests/longduration/automembers_long_test.py

@@ -43,21 +43,21 @@ def _create_entries(topo_m4):
     """
     Will act as module .Will set up required user/entries for the test cases.
     """
-    for instance in [topo_m4.ms['master1'], topo_m4.ms['master2'],
-                     topo_m4.ms['master3'], topo_m4.ms['master4']]:
+    for instance in [topo_m4.ms['supplier1'], topo_m4.ms['supplier2'],
+                     topo_m4.ms['supplier3'], topo_m4.ms['supplier4']]:
         assert instance.status()
 
     for org in ['autouserGroups', 'Employees', 'TaskEmployees']:
-        OrganizationalUnits(topo_m4.ms['master1'], DEFAULT_SUFFIX).create(properties={'ou': org})
+        OrganizationalUnits(topo_m4.ms['supplier1'], DEFAULT_SUFFIX).create(properties={'ou': org})
 
-    Backends(topo_m4.ms['master1']).create(properties={
+    Backends(topo_m4.ms['supplier1']).create(properties={
         'cn': 'SubAutoMembers',
         'nsslapd-suffix': SUBSUFFIX,
         'nsslapd-CACHE_SIZE': CACHE_SIZE,
         'nsslapd-CACHEMEM_SIZE': CACHEMEM_SIZE
     })
 
-    Domain(topo_m4.ms['master1'], SUBSUFFIX).create(properties={
+    Domain(topo_m4.ms['supplier1'], SUBSUFFIX).create(properties={
         'dc': SUBSUFFIX.split('=')[1].split(',')[0],
         'aci': [
             f'(targetattr="userPassword")(version 3.0;aci "Replication Manager Access";'
@@ -96,7 +96,7 @@ def _create_entries(topo_m4):
                       ("CN=testuserGroups,{}".format(DEFAULT_SUFFIX), 'TestDef3'),
                       ("CN=testuserGroups,{}".format(DEFAULT_SUFFIX), 'TestDef4'),
                       ("CN=testuserGroups,{}".format(DEFAULT_SUFFIX), 'TestDef5')]:
-        Groups(topo_m4.ms['master1'], suff, rdn=None).create(properties={'cn': grp})
+        Groups(topo_m4.ms['supplier1'], suff, rdn=None).create(properties={'cn': grp})
 
     for suff, grp, gid in [(SUBSUFFIX, 'SubDef1', '111'),
                            (SUBSUFFIX, 'SubDef2', '222'),
@@ -105,17 +105,17 @@ def _create_entries(topo_m4):
                            (SUBSUFFIX, 'SubDef5', '555'),
                            ('cn=subsuffGroups,{}'.format(SUBSUFFIX), 'Managers', '666'),
                            ('cn=subsuffGroups,{}'.format(SUBSUFFIX), 'Contractors', '999')]:
-        PosixGroups(topo_m4.ms['master1'], suff, rdn=None).create(properties={
+        PosixGroups(topo_m4.ms['supplier1'], suff, rdn=None).create(properties={
             'cn': grp,
             'gidNumber': gid})
 
-    for master in [topo_m4.ms['master1'], topo_m4.ms['master2'],
-                   topo_m4.ms['master3'], topo_m4.ms['master4']]:
-        AutoMembershipPlugin(master).add("nsslapd-pluginConfigArea",
+    for supplier in [topo_m4.ms['supplier1'], topo_m4.ms['supplier2'],
+                   topo_m4.ms['supplier3'], topo_m4.ms['supplier4']]:
+        AutoMembershipPlugin(supplier).add("nsslapd-pluginConfigArea",
                                          "cn=autoMembersPlugin,{}".format(DEFAULT_SUFFIX))
-        MemberOfPlugin(master).enable()
+        MemberOfPlugin(supplier).enable()
 
-    automembers = AutoMembershipDefinitions(topo_m4.ms['master1'],
+    automembers = AutoMembershipDefinitions(topo_m4.ms['supplier1'],
                                             f'cn=autoMembersPlugin,{DEFAULT_SUFFIX}')
     automember1 = automembers.create(properties={
         'cn': 'replsubGroups',
@@ -129,7 +129,7 @@ def _create_entries(topo_m4):
         'autoMemberGroupingAttr': 'member:dn'
     })
 
-    automembers = AutoMembershipRegexRules(topo_m4.ms['master1'], automember1.dn)
+    automembers = AutoMembershipRegexRules(topo_m4.ms['supplier1'], automember1.dn)
     automembers.create(properties={
         'cn': 'Managers',
         'description': f'Group placement for Managers',
@@ -172,8 +172,8 @@ def _create_entries(topo_m4):
                                      'gidNumber=^[7-9]00$',
                                      'nsAdminGroupName=^Inter'],
     })
-    for instance in [topo_m4.ms['master1'], topo_m4.ms['master2'],
-                     topo_m4.ms['master3'], topo_m4.ms['master4']]:
+    for instance in [topo_m4.ms['supplier1'], topo_m4.ms['supplier2'],
+                     topo_m4.ms['supplier3'], topo_m4.ms['supplier4']]:
         instance.restart()
 
 
@@ -181,18 +181,18 @@ def delete_users_and_wait(topo_m4, automem_scope):
     """
     Deletes entries after test and waits for replication.
     """
-    for user in nsAdminGroups(topo_m4.ms['master1'], automem_scope, rdn=None).list():
+    for user in nsAdminGroups(topo_m4.ms['supplier1'], automem_scope, rdn=None).list():
         user.delete()
-    for master in [topo_m4.ms['master2'], topo_m4.ms['master3'], topo_m4.ms['master4']]:
-        ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['master1'],
-                                                                master, timeout=30000)
+    for supplier in [topo_m4.ms['supplier2'], topo_m4.ms['supplier3'], topo_m4.ms['supplier4']]:
+        ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['supplier1'],
+                                                                supplier, timeout=30000)
 
 
 def create_entry(topo_m4, user_id, suffix, uid_no, gid_no, role_usr):
     """
     Will create entries with nsAdminGroup objectclass
     """
-    user = nsAdminGroups(topo_m4.ms['master1'], suffix, rdn=None).create(properties={
+    user = nsAdminGroups(topo_m4.ms['supplier1'], suffix, rdn=None).create(properties={
         'cn': user_id,
         'sn': user_id,
         'uid': user_id,
@@ -214,10 +214,10 @@ def test_adding_300_user(topo_m4, _create_entries):
     Adding 300 user entries matching the inclusive regex rules for
     all targetted groups at M1 and checking the same created in M2 & M3
     :id: fcd867bc-be57-11e9-9842-8c16451d917b
-    :setup: Instance with 4 masters
+    :setup: Instance with 4 suppliers
     :steps:
-        1. Add 300 user entries matching the inclusive regex rules at topo_m4.ms['master1']
-        2. Check the same created in rest masters
+        1. Add 300 user entries matching the inclusive regex rules at topo_m4.ms['supplier1']
+        2. Check the same created in rest suppliers
     :expected results:
         1. Pass
         2. Pass
@@ -232,17 +232,17 @@ def test_adding_300_user(topo_m4, _create_entries):
         create_entry(topo_m4, f'{user_rdn}{number}', automem_scope, '5795', '5693', 'Contractor')
     try:
         # Check  to sync the entries
-        for master in [topo_m4.ms['master2'], topo_m4.ms['master3'], topo_m4.ms['master4']]:
-            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['master1'],
-                                                                    master, timeout=30000)
-        for instance, grp in [(topo_m4.ms['master2'], 'Managers'),
-                              (topo_m4.ms['master3'], 'Contractors'),
-                              (topo_m4.ms['master4'], 'Interns')]:
+        for supplier in [topo_m4.ms['supplier2'], topo_m4.ms['supplier3'], topo_m4.ms['supplier4']]:
+            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['supplier1'],
+                                                                    supplier, timeout=30000)
+        for instance, grp in [(topo_m4.ms['supplier2'], 'Managers'),
+                              (topo_m4.ms['supplier3'], 'Contractors'),
+                              (topo_m4.ms['supplier4'], 'Interns')]:
             assert len(nsAdminGroup(
                 instance, f'cn={grp},{grp_container}').get_attr_vals_utf8('member')) == 300
         for grp in [default_group1, default_group2]:
-            assert not Group(topo_m4.ms['master4'], grp).get_attr_vals_utf8('member')
-            assert not Group(topo_m4.ms['master3'], grp).get_attr_vals_utf8('member')
+            assert not Group(topo_m4.ms['supplier4'], grp).get_attr_vals_utf8('member')
+            assert not Group(topo_m4.ms['supplier3'], grp).get_attr_vals_utf8('member')
 
     finally:
         delete_users_and_wait(topo_m4, automem_scope)
@@ -253,11 +253,11 @@ def test_adding_1000_users(topo_m4, _create_entries):
     Adding 1000 users matching inclusive regex for Managers/Contractors
     and exclusive regex for Interns/Visitors
     :id: f641e612-be57-11e9-94e6-8c16451d917b
-    :setup: Instance with 4 masters
+    :setup: Instance with 4 suppliers
     :steps:
         1. Add 1000 user entries matching the inclusive/exclusive
-        regex rules at topo_m4.ms['master1']
-        2. Check the same created in rest masters
+        regex rules at topo_m4.ms['supplier1']
+        2. Check the same created in rest suppliers
     :expected results:
         1. Pass
         2. Pass
@@ -271,21 +271,21 @@ def test_adding_1000_users(topo_m4, _create_entries):
         create_entry(topo_m4, f'automemusrs{number}', automem_scope, '799', '5693', 'Manager')
     try:
         # Check  to sync the entries
-        for master in [topo_m4.ms['master2'], topo_m4.ms['master3'], topo_m4.ms['master4']]:
-            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['master1'],
-                                                                    master, timeout=30000)
-        for instance, grp in [(topo_m4.ms['master1'], 'Managers'),
-                              (topo_m4.ms['master3'], 'Contractors')]:
+        for supplier in [topo_m4.ms['supplier2'], topo_m4.ms['supplier3'], topo_m4.ms['supplier4']]:
+            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['supplier1'],
+                                                                    supplier, timeout=30000)
+        for instance, grp in [(topo_m4.ms['supplier1'], 'Managers'),
+                              (topo_m4.ms['supplier3'], 'Contractors')]:
             assert len(nsAdminGroup(
                 instance, "cn={},{}".format(grp,
                                             grp_container)).get_attr_vals_utf8('member')) == 1000
-        for instance, grp in [(topo_m4.ms['master2'], 'Interns'),
-                              (topo_m4.ms['master4'], 'Visitors')]:
+        for instance, grp in [(topo_m4.ms['supplier2'], 'Interns'),
+                              (topo_m4.ms['supplier4'], 'Visitors')]:
             assert not nsAdminGroup(
                 instance, "cn={},{}".format(grp, grp_container)).get_attr_vals_utf8('member')
         for grp in [default_group1, default_group2]:
-            assert not Group(topo_m4.ms['master2'], grp).get_attr_vals_utf8('member')
-            assert not Group(topo_m4.ms['master3'], grp).get_attr_vals_utf8('member')
+            assert not Group(topo_m4.ms['supplier2'], grp).get_attr_vals_utf8('member')
+            assert not Group(topo_m4.ms['supplier3'], grp).get_attr_vals_utf8('member')
     finally:
         delete_users_and_wait(topo_m4, automem_scope)
 
@@ -294,11 +294,11 @@ def test_adding_3000_users(topo_m4, _create_entries):
     """
     Adding 3000 users matching all inclusive regex rules and no matching exclusive regex rules
     :id: ee54576e-be57-11e9-b536-8c16451d917b
-    :setup: Instance with 4 masters
+    :setup: Instance with 4 suppliers
     :steps:
         1. Add 3000 user entries matching the inclusive/exclusive regex
-        rules at topo_m4.ms['master1']
-        2. Check the same created in rest masters
+        rules at topo_m4.ms['supplier1']
+        2. Check the same created in rest suppliers
     :expected results:
         1. Pass
         2. Pass
@@ -311,21 +311,21 @@ def test_adding_3000_users(topo_m4, _create_entries):
     for number in range(3000):
         create_entry(topo_m4, f'automemusrs{number}', automem_scope, '5995', '5693', 'Manager')
     try:
-        for master in [topo_m4.ms['master2'], topo_m4.ms['master3'], topo_m4.ms['master4']]:
-            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['master1'],
-                                                                    master, timeout=30000)
-        for instance, grp in [(topo_m4.ms['master1'], 'Managers'),
-                              (topo_m4.ms['master3'], 'Contractors'),
-                              (topo_m4.ms['master2'], 'Interns'),
-                              (topo_m4.ms['master4'], 'Visitors')
+        for supplier in [topo_m4.ms['supplier2'], topo_m4.ms['supplier3'], topo_m4.ms['supplier4']]:
+            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['supplier1'],
+                                                                    supplier, timeout=30000)
+        for instance, grp in [(topo_m4.ms['supplier1'], 'Managers'),
+                              (topo_m4.ms['supplier3'], 'Contractors'),
+                              (topo_m4.ms['supplier2'], 'Interns'),
+                              (topo_m4.ms['supplier4'], 'Visitors')
                               ]:
             assert len(
                 nsAdminGroup(instance,
                              "cn={},{}".format(grp,
                                                grp_container)).get_attr_vals_utf8('member')) == 3000
         for grp in [default_group1, default_group2]:
-            assert not Group(topo_m4.ms['master2'], grp).get_attr_vals_utf8('member')
-            assert not Group(topo_m4.ms['master3'], grp).get_attr_vals_utf8('member')
+            assert not Group(topo_m4.ms['supplier2'], grp).get_attr_vals_utf8('member')
+            assert not Group(topo_m4.ms['supplier3'], grp).get_attr_vals_utf8('member')
     finally:
         delete_users_and_wait(topo_m4, automem_scope)
 
@@ -334,11 +334,11 @@ def test_3000_users_matching_all_exclusive_regex(topo_m4, _create_entries):
     """
     Adding 3000 users matching all exclusive regex rules and no matching inclusive regex rules
     :id: e789331e-be57-11e9-b298-8c16451d917b
-    :setup: Instance with 4 masters
+    :setup: Instance with 4 suppliers
     :steps:
         1. Add 3000 user entries matching the inclusive/exclusive regex
-        rules at topo_m4.ms['master1']
-        2. Check the same created in rest masters
+        rules at topo_m4.ms['supplier1']
+        2. Check the same created in rest suppliers
     :expected results:
         1. Pass
         2. Pass
@@ -352,17 +352,17 @@ def test_3000_users_matching_all_exclusive_regex(topo_m4, _create_entries):
     for number in range(3000):
         create_entry(topo_m4, f'automemusrs{number}', automem_scope, '399', '700', 'Manager')
     try:
-        for master in [topo_m4.ms['master2'], topo_m4.ms['master3'], topo_m4.ms['master4']]:
-            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['master1'],
-                                                                    master, timeout=30000)
-
-        for instance, grp in [(topo_m4.ms['master1'], default_group4),
-                              (topo_m4.ms['master2'], default_group1),
-                              (topo_m4.ms['master3'], default_group2),
-                              (topo_m4.ms['master4'], default_group2)]:
+        for supplier in [topo_m4.ms['supplier2'], topo_m4.ms['supplier3'], topo_m4.ms['supplier4']]:
+            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['supplier1'],
+                                                                    supplier, timeout=30000)
+
+        for instance, grp in [(topo_m4.ms['supplier1'], default_group4),
+                              (topo_m4.ms['supplier2'], default_group1),
+                              (topo_m4.ms['supplier3'], default_group2),
+                              (topo_m4.ms['supplier4'], default_group2)]:
             assert len(nsAdminGroup(instance, grp).get_attr_vals_utf8('member')) == 3000
-        for grp, instance in [('Managers', topo_m4.ms['master3']),
-                              ('Contractors', topo_m4.ms['master2'])]:
+        for grp, instance in [('Managers', topo_m4.ms['supplier3']),
+                              ('Contractors', topo_m4.ms['supplier2'])]:
             assert not nsAdminGroup(
                 instance, "cn={},{}".format(grp, grp_container)).get_attr_vals_utf8('member')
 
@@ -374,11 +374,11 @@ def test_no_matching_inclusive_regex_rules(topo_m4, _create_entries):
     """
     Adding 3000 users matching all exclusive regex rules and no matching inclusive regex rules
     :id: e0cc0e16-be57-11e9-9c0f-8c16451d917b
-    :setup: Instance with 4 masters
+    :setup: Instance with 4 suppliers
     :steps:
         1. Add 3000 user entries matching the inclusive/exclusive regex
-        rules at topo_m4.ms['master1']
-        2. Check the same created in rest masters
+        rules at topo_m4.ms['supplier1']
+        2. Check the same created in rest suppliers
     :expected results:
         1. Pass
         2. Pass
@@ -390,16 +390,16 @@ def test_no_matching_inclusive_regex_rules(topo_m4, _create_entries):
     for number in range(3000):
         create_entry(topo_m4, f'automemusrs{number}', automem_scope, '399', '700', 'Manager')
     try:
-        for master in [topo_m4.ms['master2'], topo_m4.ms['master3'], topo_m4.ms['master4']]:
-            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['master1'],
-                                                                    master, timeout=30000)
-        for instance, grp in [(topo_m4.ms['master1'], "cn=SubDef4,{}".format(DEFAULT_SUFFIX)),
-                              (topo_m4.ms['master2'], default_group1),
-                              (topo_m4.ms['master3'], "cn=SubDef2,{}".format(DEFAULT_SUFFIX)),
-                              (topo_m4.ms['master4'], "cn=SubDef3,{}".format(DEFAULT_SUFFIX))]:
+        for supplier in [topo_m4.ms['supplier2'], topo_m4.ms['supplier3'], topo_m4.ms['supplier4']]:
+            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['supplier1'],
+                                                                    supplier, timeout=30000)
+        for instance, grp in [(topo_m4.ms['supplier1'], "cn=SubDef4,{}".format(DEFAULT_SUFFIX)),
+                              (topo_m4.ms['supplier2'], default_group1),
+                              (topo_m4.ms['supplier3'], "cn=SubDef2,{}".format(DEFAULT_SUFFIX)),
+                              (topo_m4.ms['supplier4'], "cn=SubDef3,{}".format(DEFAULT_SUFFIX))]:
             assert len(nsAdminGroup(instance, grp).get_attr_vals_utf8('member')) == 3000
-        for grp, instance in [('Managers', topo_m4.ms['master3']),
-                              ('Contractors', topo_m4.ms['master2'])]:
+        for grp, instance in [('Managers', topo_m4.ms['supplier3']),
+                              ('Contractors', topo_m4.ms['supplier2'])]:
             assert not nsAdminGroup(
                 instance, "cn={},{}".format(grp, grp_container)).get_attr_vals_utf8('member')
     finally:
@@ -411,14 +411,14 @@ def test_adding_deleting_and_re_adding_the_same_3000(topo_m4, _create_entries):
     Adding, Deleting and re-adding the same 3000 users matching all
     exclusive regex rules and no matching inclusive regex rules
     :id: d939247c-be57-11e9-825d-8c16451d917b
-    :setup: Instance with 4 masters
+    :setup: Instance with 4 suppliers
     :steps:
         1. Add 3000 user entries matching the inclusive/exclusive regex
-        rules at topo_m4.ms['master1']
-        2. Check the same created in rest masters
+        rules at topo_m4.ms['supplier1']
+        2. Check the same created in rest suppliers
         3. Delete 3000 users
         4. Again add 3000 users
-        5. Check the same created in rest masters
+        5. Check the same created in rest suppliers
     :expected results:
         1. Pass
         2. Pass
@@ -433,29 +433,29 @@ def test_adding_deleting_and_re_adding_the_same_3000(topo_m4, _create_entries):
     for number in range(3000):
         create_entry(topo_m4, f'automemusrs{number}', automem_scope, '399', '700', 'Manager')
     try:
-        for master in [topo_m4.ms['master2'], topo_m4.ms['master3'], topo_m4.ms['master4']]:
-            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['master1'],
-                                                                    master, timeout=30000)
-        assert len(nsAdminGroup(topo_m4.ms['master2'],
+        for supplier in [topo_m4.ms['supplier2'], topo_m4.ms['supplier3'], topo_m4.ms['supplier4']]:
+            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['supplier1'],
+                                                                    supplier, timeout=30000)
+        assert len(nsAdminGroup(topo_m4.ms['supplier2'],
                                 default_group1).get_attr_vals_utf8('member')) == 3000
         # Deleting
-        for user in nsAdminGroups(topo_m4.ms['master2'], automem_scope, rdn=None).list():
+        for user in nsAdminGroups(topo_m4.ms['supplier2'], automem_scope, rdn=None).list():
             user.delete()
-        for master in [topo_m4.ms['master1'], topo_m4.ms['master3'], topo_m4.ms['master4']]:
-            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['master2'],
-                                                                    master, timeout=30000)
+        for supplier in [topo_m4.ms['supplier1'], topo_m4.ms['supplier3'], topo_m4.ms['supplier4']]:
+            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['supplier2'],
+                                                                    supplier, timeout=30000)
         # Again adding
         for number in range(3000):
             create_entry(topo_m4, f'automemusrs{number}', automem_scope, '399', '700', 'Manager')
-        for master in [topo_m4.ms['master2'], topo_m4.ms['master3'], topo_m4.ms['master4']]:
-            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['master1'],
-                                                                    master, timeout=30000)
-        for instance, grp in [(topo_m4.ms['master1'], "cn=SubDef4,{}".format(DEFAULT_SUFFIX)),
-                              (topo_m4.ms['master3'], "cn=SubDef5,{}".format(DEFAULT_SUFFIX)),
-                              (topo_m4.ms['master4'], "cn=SubDef3,{}".format(DEFAULT_SUFFIX))]:
+        for supplier in [topo_m4.ms['supplier2'], topo_m4.ms['supplier3'], topo_m4.ms['supplier4']]:
+            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['supplier1'],
+                                                                    supplier, timeout=30000)
+        for instance, grp in [(topo_m4.ms['supplier1'], "cn=SubDef4,{}".format(DEFAULT_SUFFIX)),
+                              (topo_m4.ms['supplier3'], "cn=SubDef5,{}".format(DEFAULT_SUFFIX)),
+                              (topo_m4.ms['supplier4'], "cn=SubDef3,{}".format(DEFAULT_SUFFIX))]:
             assert len(nsAdminGroup(instance, grp).get_attr_vals_utf8('member')) == 3000
-        for grp, instance in [('Interns', topo_m4.ms['master3']),
-                              ('Contractors', topo_m4.ms['master2'])]:
+        for grp, instance in [('Interns', topo_m4.ms['supplier3']),
+                              ('Contractors', topo_m4.ms['supplier2'])]:
             assert not nsAdminGroup(
                 instance, "cn={},{}".format(grp, grp_container)).get_attr_vals_utf8('member')
     finally:
@@ -467,14 +467,14 @@ def test_re_adding_the_same_3000_users(topo_m4, _create_entries):
     Adding, Deleting and re-adding the same 3000 users matching all inclusive
     regex rules and no matching exclusive regex rules
     :id: d2f5f112-be57-11e9-b164-8c16451d917b
-    :setup: Instance with 4 masters
+    :setup: Instance with 4 suppliers
     :steps:
         1. Add 3000 user entries matching the inclusive/exclusive regex
-        rules at topo_m4.ms['master1']
-        2. Check the same created in rest masters
+        rules at topo_m4.ms['supplier1']
+        2. Check the same created in rest suppliers
         3. Delete 3000 users
         4. Again add 3000 users
-        5. Check the same created in rest masters
+        5. Check the same created in rest suppliers
     :expected results:
         1. Pass
         2. Pass
@@ -490,11 +490,11 @@ def test_re_adding_the_same_3000_users(topo_m4, _create_entries):
     for number in range(3000):
         create_entry(topo_m4, f'automemusrs{number}', automem_scope, '5995', '5693', 'Manager')
     try:
-        for master in [topo_m4.ms['master1'], topo_m4.ms['master3'], topo_m4.ms['master4']]:
-            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['master2'],
-                                                                    master, timeout=30000)
+        for supplier in [topo_m4.ms['supplier1'], topo_m4.ms['supplier3'], topo_m4.ms['supplier4']]:
+            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['supplier2'],
+                                                                    supplier, timeout=30000)
         assert len(nsAdminGroup(
-            topo_m4.ms['master2'],
+            topo_m4.ms['supplier2'],
             f'cn=Contractors,{grp_container}').get_attr_vals_utf8('member')) == 3000
         # Deleting
         delete_users_and_wait(topo_m4, automem_scope)
@@ -502,16 +502,16 @@ def test_re_adding_the_same_3000_users(topo_m4, _create_entries):
         # re-adding
         for number in range(3000):
             create_entry(topo_m4, f'automemusrs{number}', automem_scope, '5995', '5693', 'Manager')
-        for master in [topo_m4.ms['master2'], topo_m4.ms['master3'], topo_m4.ms['master4']]:
-            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['master1'],
-                                                                    master, timeout=30000)
-        for instance, grp in [(topo_m4.ms['master1'], "cn=Managers,{}".format(grp_container)),
-                              (topo_m4.ms['master3'], "cn=Contractors,{}".format(grp_container)),
-                              (topo_m4.ms['master4'], "cn=Visitors,{}".format(grp_container)),
-                              (topo_m4.ms['master2'], "cn=Interns,{}".format(grp_container))]:
+        for supplier in [topo_m4.ms['supplier2'], topo_m4.ms['supplier3'], topo_m4.ms['supplier4']]:
+            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['supplier1'],
+                                                                    supplier, timeout=30000)
+        for instance, grp in [(topo_m4.ms['supplier1'], "cn=Managers,{}".format(grp_container)),
+                              (topo_m4.ms['supplier3'], "cn=Contractors,{}".format(grp_container)),
+                              (topo_m4.ms['supplier4'], "cn=Visitors,{}".format(grp_container)),
+                              (topo_m4.ms['supplier2'], "cn=Interns,{}".format(grp_container))]:
             assert len(nsAdminGroup(instance, grp).get_attr_vals_utf8('member')) == 3000
-        for grp, instance in [(default_group2, topo_m4.ms['master4']),
-                              (default_group1, topo_m4.ms['master3'])]:
+        for grp, instance in [(default_group2, topo_m4.ms['supplier4']),
+                              (default_group1, topo_m4.ms['supplier3'])]:
             assert not nsAdminGroup(instance, grp).get_attr_vals_utf8('member')
     finally:
         delete_users_and_wait(topo_m4, automem_scope)
@@ -522,14 +522,14 @@ def test_users_with_different_uid_and_gid_nos(topo_m4, _create_entries):
     Adding, Deleting and re-adding the same 3000 users with
     different uid and gid nos, with different inclusive/exclusive matching regex rules
     :id: cc595a1a-be57-11e9-b053-8c16451d917b
-    :setup: Instance with 4 masters
+    :setup: Instance with 4 suppliers
     :steps:
         1. Add 3000 user entries matching the inclusive/exclusive regex
-        rules at topo_m4.ms['master1']
-        2. Check the same created in rest masters
+        rules at topo_m4.ms['supplier1']
+        2. Check the same created in rest suppliers
         3. Delete 3000 users
         4. Again add 3000 users
-        5. Check the same created in rest masters
+        5. Check the same created in rest suppliers
     :expected results:
         1. Pass
         2. Pass
@@ -545,39 +545,39 @@ def test_users_with_different_uid_and_gid_nos(topo_m4, _create_entries):
     for number in range(3000):
         create_entry(topo_m4, f'automemusrs{number}', automem_scope, '3994', '5695', 'OnDeputation')
     try:
-        for master in [topo_m4.ms['master2'], topo_m4.ms['master3'], topo_m4.ms['master4']]:
-            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['master1'],
-                                                                    master, timeout=30000)
-        for intstance, grp in [(topo_m4.ms['master2'], default_group1),
-                               (topo_m4.ms['master3'], default_group2)]:
+        for supplier in [topo_m4.ms['supplier2'], topo_m4.ms['supplier3'], topo_m4.ms['supplier4']]:
+            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['supplier1'],
+                                                                    supplier, timeout=30000)
+        for intstance, grp in [(topo_m4.ms['supplier2'], default_group1),
+                               (topo_m4.ms['supplier3'], default_group2)]:
             assert len(nsAdminGroup(intstance, grp).get_attr_vals_utf8('member')) == 3000
-        for grp, instance in [('Contractors', topo_m4.ms['master3']),
-                              ('Managers', topo_m4.ms['master1'])]:
+        for grp, instance in [('Contractors', topo_m4.ms['supplier3']),
+                              ('Managers', topo_m4.ms['supplier1'])]:
             assert not nsAdminGroup(
                 instance, "cn={},{}".format(grp, grp_container)).get_attr_vals_utf8('member')
         # Deleting
-        for user in nsAdminGroups(topo_m4.ms['master1'], automem_scope, rdn=None).list():
+        for user in nsAdminGroups(topo_m4.ms['supplier1'], automem_scope, rdn=None).list():
             user.delete()
-        for master in [topo_m4.ms['master2'], topo_m4.ms['master3'], topo_m4.ms['master4']]:
-            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['master1'],
-                                                                    master, timeout=30000)
+        for supplier in [topo_m4.ms['supplier2'], topo_m4.ms['supplier3'], topo_m4.ms['supplier4']]:
+            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['supplier1'],
+                                                                    supplier, timeout=30000)
         # re-adding
         for number in range(3000):
             create_entry(topo_m4, f'automemusrs{number}', automem_scope,
                          '5995', '5693', 'OnDeputation')
 
-        for master in [topo_m4.ms['master2'], topo_m4.ms['master3'], topo_m4.ms['master4']]:
-            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['master1'],
-                                                                    master, timeout=30000)
-        for grp, instance in [('Contractors', topo_m4.ms['master3']),
-                              ('Managers', topo_m4.ms['master1']),
-                              ('Interns', topo_m4.ms['master2']),
-                              ('Visitors', topo_m4.ms['master4'])]:
+        for supplier in [topo_m4.ms['supplier2'], topo_m4.ms['supplier3'], topo_m4.ms['supplier4']]:
+            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['supplier1'],
+                                                                    supplier, timeout=30000)
+        for grp, instance in [('Contractors', topo_m4.ms['supplier3']),
+                              ('Managers', topo_m4.ms['supplier1']),
+                              ('Interns', topo_m4.ms['supplier2']),
+                              ('Visitors', topo_m4.ms['supplier4'])]:
             assert len(nsAdminGroup(
                 instance, f'cn={grp},{grp_container}').get_attr_vals_utf8('member')) == 3000
 
-        for instance, grp in [(topo_m4.ms['master2'], default_group1),
-                              (topo_m4.ms['master3'], default_group2)]:
+        for instance, grp in [(topo_m4.ms['supplier2'], default_group1),
+                              (topo_m4.ms['supplier3'], default_group2)]:
             assert not nsAdminGroup(instance, grp).get_attr_vals_utf8('member')
     finally:
         delete_users_and_wait(topo_m4, automem_scope)
@@ -588,12 +588,12 @@ def test_bulk_users_to_non_automemscope(topo_m4, _create_entries):
     Adding bulk users to non-automem_scope and then running modrdn
     operation to change the ou to automem_scope
     :id: c532dc0c-be57-11e9-bcca-8c16451d917b
-    :setup: Instance with 4 masters
+    :setup: Instance with 4 suppliers
     :steps:
         1. Running modrdn operation to change the ou to automem_scope
-        2. Add 3000 user entries to non-automem_scope at topo_m4.ms['master1']
+        2. Add 3000 user entries to non-automem_scope at topo_m4.ms['supplier1']
         3. Run AutomemberRebuildMembershipTask
-        4. Check the same created in rest masters
+        4. Check the same created in rest suppliers
     :expected results:
         1. Pass
         2. Pass
@@ -604,49 +604,49 @@ def test_bulk_users_to_non_automemscope(topo_m4, _create_entries):
     grp_container = "cn=replsubGroups,{}".format(DEFAULT_SUFFIX)
     default_group1 = "cn=SubDef3,{}".format(DEFAULT_SUFFIX)
     default_group2 = "cn=SubDef5,{}".format(DEFAULT_SUFFIX)
-    nsContainers(topo_m4.ms['master1'], DEFAULT_SUFFIX).create(properties={'cn': 'ChangeThisCN'})
-    Group(topo_m4.ms['master1'],
+    nsContainers(topo_m4.ms['supplier1'], DEFAULT_SUFFIX).create(properties={'cn': 'ChangeThisCN'})
+    Group(topo_m4.ms['supplier1'],
           f'cn=replsubGroups,cn=autoMembersPlugin,{DEFAULT_SUFFIX}').replace('autoMemberScope',
                                                                              automem_scope)
-    for instance in [topo_m4.ms['master1'], topo_m4.ms['master2'],
-                     topo_m4.ms['master3'], topo_m4.ms['master4']]:
+    for instance in [topo_m4.ms['supplier1'], topo_m4.ms['supplier2'],
+                     topo_m4.ms['supplier3'], topo_m4.ms['supplier4']]:
         instance.restart()
     # Adding BulkUsers
     for number in range(3000):
         create_entry(topo_m4, f'automemusrs{number}', f'cn=ChangeThisCN,{DEFAULT_SUFFIX}',
                      '5995', '5693', 'Supervisor')
     try:
-        for master in [topo_m4.ms['master2'], topo_m4.ms['master3'], topo_m4.ms['master4']]:
-            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['master1'],
-                                                                    master, timeout=30000)
-        for instance, grp in [(topo_m4.ms['master2'], default_group1),
-                              (topo_m4.ms['master1'], "cn=Managers,{}".format(grp_container))]:
+        for supplier in [topo_m4.ms['supplier2'], topo_m4.ms['supplier3'], topo_m4.ms['supplier4']]:
+            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['supplier1'],
+                                                                    supplier, timeout=30000)
+        for instance, grp in [(topo_m4.ms['supplier2'], default_group1),
+                              (topo_m4.ms['supplier1'], "cn=Managers,{}".format(grp_container))]:
             assert not nsAdminGroup(instance, grp).get_attr_vals_utf8('member')
         # Deleting BulkUsers "User_Name" Suffix "Nof_Users"
-        topo_m4.ms['master3'].rename_s(f"CN=ChangeThisCN,{DEFAULT_SUFFIX}",
+        topo_m4.ms['supplier3'].rename_s(f"CN=ChangeThisCN,{DEFAULT_SUFFIX}",
                                        f'cn=EmployeesNew', newsuperior=DEFAULT_SUFFIX, delold=1)
-        for master in [topo_m4.ms['master2'], topo_m4.ms['master3'], topo_m4.ms['master4']]:
-            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['master1'],
-                                                                    master, timeout=30000)
-        AutomemberRebuildMembershipTask(topo_m4.ms['master1']).create(properties={
+        for supplier in [topo_m4.ms['supplier2'], topo_m4.ms['supplier3'], topo_m4.ms['supplier4']]:
+            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['supplier1'],
+                                                                    supplier, timeout=30000)
+        AutomemberRebuildMembershipTask(topo_m4.ms['supplier1']).create(properties={
             'basedn': automem_scope,
             'filter': "objectClass=posixAccount"
         })
-        for master in [topo_m4.ms['master2'], topo_m4.ms['master3'], topo_m4.ms['master4']]:
-            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['master1'],
-                                                                    master, timeout=30000)
-        for instance, grp in [(topo_m4.ms['master1'], 'Managers'),
-                              (topo_m4.ms['master2'], 'Interns'),
-                              (topo_m4.ms['master3'], 'Contractors'),
-                              (topo_m4.ms['master4'], 'Visitors')]:
+        for supplier in [topo_m4.ms['supplier2'], topo_m4.ms['supplier3'], topo_m4.ms['supplier4']]:
+            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['supplier1'],
+                                                                    supplier, timeout=30000)
+        for instance, grp in [(topo_m4.ms['supplier1'], 'Managers'),
+                              (topo_m4.ms['supplier2'], 'Interns'),
+                              (topo_m4.ms['supplier3'], 'Contractors'),
+                              (topo_m4.ms['supplier4'], 'Visitors')]:
             assert len(nsAdminGroup(
                 instance, f'cn={grp},{grp_container}').get_attr_vals_utf8('member')) == 3000
-        for grp, instance in [(default_group1, topo_m4.ms['master2']),
-                              (default_group2, topo_m4.ms['master3'])]:
+        for grp, instance in [(default_group1, topo_m4.ms['supplier2']),
+                              (default_group2, topo_m4.ms['supplier3'])]:
             assert not nsAdminGroup(instance, grp).get_attr_vals_utf8('member')
     finally:
         delete_users_and_wait(topo_m4, automem_scope)
-        nsContainer(topo_m4.ms['master1'], "CN=EmployeesNew,{}".format(DEFAULT_SUFFIX)).delete()
+        nsContainer(topo_m4.ms['supplier1'], "CN=EmployeesNew,{}".format(DEFAULT_SUFFIX)).delete()
 
 
 def test_automemscope_and_running_modrdn(topo_m4, _create_entries):
@@ -654,12 +654,12 @@ def test_automemscope_and_running_modrdn(topo_m4, _create_entries):
     Adding bulk users to non-automem_scope and running modrdn operation
     with new superior to automem_scope
     :id: bf60f958-be57-11e9-945d-8c16451d917b
-    :setup: Instance with 4 masters
+    :setup: Instance with 4 suppliers
     :steps:
         1. Running modrdn operation to change the ou to automem_scope
-        2. Add 3000 user entries to non-automem_scope at topo_m4.ms['master1']
+        2. Add 3000 user entries to non-automem_scope at topo_m4.ms['supplier1']
         3. Run AutomemberRebuildMembershipTask
-        4. Check the same created in rest masters
+        4. Check the same created in rest suppliers
     :expected results:
         1. Pass
         2. Pass
@@ -672,13 +672,13 @@ def test_automemscope_and_running_modrdn(topo_m4, _create_entries):
     grp_container = "cn=replsubGroups,{}".format(DEFAULT_SUFFIX)
     default_group1 = "cn=SubDef3,{}".format(DEFAULT_SUFFIX)
     default_group2 = "cn=SubDef5,{}".format(DEFAULT_SUFFIX)
-    OrganizationalUnits(topo_m4.ms['master1'],
+    OrganizationalUnits(topo_m4.ms['supplier1'],
                         DEFAULT_SUFFIX).create(properties={'ou': 'NewEmployees'})
-    Group(topo_m4.ms['master1'],
+    Group(topo_m4.ms['supplier1'],
           f'cn=replsubGroups,cn=autoMembersPlugin,{DEFAULT_SUFFIX}').replace('autoMemberScope',
                                                                              automem_scope2)
-    for instance in [topo_m4.ms['master1'], topo_m4.ms['master2'],
-                     topo_m4.ms['master3'], topo_m4.ms['master4']]:
+    for instance in [topo_m4.ms['supplier1'], topo_m4.ms['supplier2'],
+                     topo_m4.ms['supplier3'], topo_m4.ms['supplier4']]:
         Config(instance).replace('nsslapd-errorlog-level', '73728')
         instance.restart()
     # Adding bulk users
@@ -686,36 +686,36 @@ def test_automemscope_and_running_modrdn(topo_m4, _create_entries):
         create_entry(topo_m4, f'automemusrs{number}', automem_scope1,
                      '3994', '5695', 'OnDeputation')
     try:
-        for master in [topo_m4.ms['master2'], topo_m4.ms['master3'], topo_m4.ms['master4']]:
-            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['master1'],
-                                                                    master, timeout=30000)
-        for grp, instance in [(default_group2, topo_m4.ms['master3']),
-                              ("cn=Managers,{}".format(grp_container), topo_m4.ms['master1']),
-                              ("cn=Contractors,{}".format(grp_container), topo_m4.ms['master3'])]:
+        for supplier in [topo_m4.ms['supplier2'], topo_m4.ms['supplier3'], topo_m4.ms['supplier4']]:
+            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['supplier1'],
+                                                                    supplier, timeout=30000)
+        for grp, instance in [(default_group2, topo_m4.ms['supplier3']),
+                              ("cn=Managers,{}".format(grp_container), topo_m4.ms['supplier1']),
+                              ("cn=Contractors,{}".format(grp_container), topo_m4.ms['supplier3'])]:
             assert not nsAdminGroup(instance, grp).get_attr_vals_utf8('member')
         count = 0
-        for user in nsAdminGroups(topo_m4.ms['master3'], automem_scope1, rdn=None).list():
-            topo_m4.ms['master1'].rename_s(user.dn,
+        for user in nsAdminGroups(topo_m4.ms['supplier3'], automem_scope1, rdn=None).list():
+            topo_m4.ms['supplier1'].rename_s(user.dn,
                                            f'cn=New{user_rdn}{count}',
                                            newsuperior=automem_scope2, delold=1)
             count += 1
-        for master in [topo_m4.ms['master2'], topo_m4.ms['master3'], topo_m4.ms['master4']]:
-            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['master1'],
-                                                                    master, timeout=30000)
-        AutomemberRebuildMembershipTask(topo_m4.ms['master1']).create(properties={
+        for supplier in [topo_m4.ms['supplier2'], topo_m4.ms['supplier3'], topo_m4.ms['supplier4']]:
+            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['supplier1'],
+                                                                    supplier, timeout=30000)
+        AutomemberRebuildMembershipTask(topo_m4.ms['supplier1']).create(properties={
             'basedn': automem_scope2,
             'filter': "objectClass=posixAccount"
         })
-        for master in [topo_m4.ms['master2'], topo_m4.ms['master3'], topo_m4.ms['master4']]:
-            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['master1'],
-                                                                    master, timeout=30000)
-        for instance, grp in [(topo_m4.ms['master3'], default_group2),
-                              (topo_m4.ms['master3'], default_group1)]:
+        for supplier in [topo_m4.ms['supplier2'], topo_m4.ms['supplier3'], topo_m4.ms['supplier4']]:
+            ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(topo_m4.ms['supplier1'],
+                                                                    supplier, timeout=30000)
+        for instance, grp in [(topo_m4.ms['supplier3'], default_group2),
+                              (topo_m4.ms['supplier3'], default_group1)]:
             assert len(nsAdminGroup(instance, grp).get_attr_vals_utf8('member')) == 3000
-        for instance, grp in [(topo_m4.ms['master1'], 'Managers'),
-                              (topo_m4.ms['master3'], 'Contractors'),
-                              (topo_m4.ms['master2'], 'Interns'),
-                              (topo_m4.ms['master4'], 'Visitors')]:
+        for instance, grp in [(topo_m4.ms['supplier1'], 'Managers'),
+                              (topo_m4.ms['supplier3'], 'Contractors'),
+                              (topo_m4.ms['supplier2'], 'Interns'),
+                              (topo_m4.ms['supplier4'], 'Visitors')]:
             assert not nsAdminGroup(
                 instance, "cn={},{}".format(grp, grp_container)).get_attr_vals_utf8('member')
     finally:

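
The rebuild-task pattern above (create the task, then lean on replication waits) can also block on the task itself.  A minimal sketch, assuming lib389's generic Task helpers (wait()/get_exit_code()) and a hypothetical instance `inst`:

    from lib389.tasks import AutomemberRebuildMembershipTask
    from lib389._constants import DEFAULT_SUFFIX

    # Rebuild membership for every posixAccount under the suffix, then block
    # until the task entry reports completion (assumed Task.wait() helper).
    task = AutomemberRebuildMembershipTask(inst).create(properties={
        'basedn': DEFAULT_SUFFIX,
        'filter': "objectClass=posixAccount",
    })
    task.wait()
    assert task.get_exit_code() == 0  # 0 means the rebuild finished cleanly
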
+ 1 - 1
dirsrvtests/tests/stress/README

@@ -8,6 +8,6 @@ A generic high load, long running tests
 reliab7_5_test.py
 ------------------------------
 
-This script is a light-weight version of the legacy TET stress test called "Reliabilty 15".  This test consists of two MMR Masters, and a 5000 entry database.  The test starts off with two threads doing unindexed searchesi(1 for each master).  These do not exit untl the entire test completes.  Then while the unindexed searches are going on, the test performs a set of adds, mods, deletes, and modrdns on each master at the same time.  It performs this set of operations 1000 times.  The main goal of this script is to test stablilty, replication convergence, and memory growth/fragmentation.
+This script is a lightweight version of the legacy TET stress test called "Reliability 15".  This test consists of two MMR suppliers and a 5000-entry database.  The test starts off with two threads doing unindexed searches (one for each supplier).  These do not exit until the entire test completes.  Then, while the unindexed searches are going on, the test performs a set of adds, mods, deletes, and modrdns on each supplier at the same time.  It performs this set of operations 1000 times.  The main goal of this script is to test stability, replication convergence, and memory growth/fragmentation.
 
 Known issue: the server can deadlock in the libdb4 code while performing modrdns (under investigation via https://fedorahosted.org/389/ticket/48166)
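
The long-lived reader threads racing short write loops are the core of the pattern.  A minimal, standalone sketch of that shape with plain python-ldap (hypothetical URI, credentials, and filter; not the test itself):

    import threading
    import ldap

    URI = 'ldap://localhost:389'        # hypothetical supplier URI
    SUFFIX = 'dc=example,dc=com'        # hypothetical suffix
    RUNNING = True

    def unindexed_reader():
        # 'description' is typically unindexed, so each search walks a large
        # candidate list and keeps steady read pressure on the server.
        conn = ldap.initialize(URI)
        conn.simple_bind_s('cn=Directory Manager', 'password')
        while RUNNING:
            conn.search_s(SUFFIX, ldap.SCOPE_SUBTREE, '(description=*stress*)')

    reader = threading.Thread(target=unindexed_reader, daemon=True)
    reader.start()
    # ... adds/mods/deletes/modrdns would run here, once per pass ...
    RUNNING = False
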

+ 132 - 132
dirsrvtests/tests/stress/reliabilty/reliab_7_5_test.py

@@ -41,11 +41,11 @@ RUNNING = True
 DEBUGGING = os.getenv('DEBUGGING', default=False)
 
 class TopologyReplication(object):
-    def __init__(self, master1, master2):
-        master1.open()
-        self.master1 = master1
-        master2.open()
-        self.master2 = master2
+    def __init__(self, supplier1, supplier2):
+        supplier1.open()
+        self.supplier1 = supplier1
+        supplier2.open()
+        self.supplier2 = supplier2
 
 
 @pytest.fixture(scope="module")
@@ -54,68 +54,68 @@ def topology(request):
     if installation1_prefix:
         args_instance[SER_DEPLOYED_DIR] = installation1_prefix
 
-    # Creating master 1...
-    master1 = DirSrv(verbose=DEBUGGING)
-    args_instance[SER_HOST] = HOST_MASTER_1
-    args_instance[SER_PORT] = PORT_MASTER_1
-    args_instance[SER_SECURE_PORT] = SECUREPORT_MASTER_1
-    args_instance[SER_SERVERID_PROP] = SERVERID_MASTER_1
+    # Creating supplier 1...
+    supplier1 = DirSrv(verbose=DEBUGGING)
+    args_instance[SER_HOST] = HOST_SUPPLIER_1
+    args_instance[SER_PORT] = PORT_SUPPLIER_1
+    args_instance[SER_SECURE_PORT] = SECUREPORT_SUPPLIER_1
+    args_instance[SER_SERVERID_PROP] = SERVERID_SUPPLIER_1
     args_instance[SER_CREATION_SUFFIX] = DEFAULT_SUFFIX
-    args_master = args_instance.copy()
-    master1.allocate(args_master)
-    instance_master1 = master1.exists()
-    if instance_master1:
-        master1.delete()
-    master1.create()
-    master1.open()
-    master1.replica.enableReplication(suffix=SUFFIX, role=ReplicaRole.MASTER,
-                                      replicaId=REPLICAID_MASTER_1)
-
-    # Creating master 2...
-    master2 = DirSrv(verbose=DEBUGGING)
-    args_instance[SER_HOST] = HOST_MASTER_2
-    args_instance[SER_PORT] = PORT_MASTER_2
-    args_instance[SER_SECURE_PORT] = SECUREPORT_MASTER_2
-    args_instance[SER_SERVERID_PROP] = SERVERID_MASTER_2
+    args_supplier = args_instance.copy()
+    supplier1.allocate(args_supplier)
+    instance_supplier1 = supplier1.exists()
+    if instance_supplier1:
+        supplier1.delete()
+    supplier1.create()
+    supplier1.open()
+    supplier1.replica.enableReplication(suffix=SUFFIX, role=ReplicaRole.SUPPLIER,
+                                      replicaId=REPLICAID_SUPPLIER_1)
+
+    # Creating supplier 2...
+    supplier2 = DirSrv(verbose=DEBUGGING)
+    args_instance[SER_HOST] = HOST_SUPPLIER_2
+    args_instance[SER_PORT] = PORT_SUPPLIER_2
+    args_instance[SER_SECURE_PORT] = SECUREPORT_SUPPLIER_2
+    args_instance[SER_SERVERID_PROP] = SERVERID_SUPPLIER_2
     args_instance[SER_CREATION_SUFFIX] = DEFAULT_SUFFIX
-    args_master = args_instance.copy()
-    master2.allocate(args_master)
-    instance_master2 = master2.exists()
-    if instance_master2:
-        master2.delete()
-    master2.create()
-    master2.open()
-    master2.replica.enableReplication(suffix=SUFFIX, role=ReplicaRole.MASTER,
-                                      replicaId=REPLICAID_MASTER_2)
+    args_supplier = args_instance.copy()
+    supplier2.allocate(args_supplier)
+    instance_supplier2 = supplier2.exists()
+    if instance_supplier2:
+        supplier2.delete()
+    supplier2.create()
+    supplier2.open()
+    supplier2.replica.enableReplication(suffix=SUFFIX, role=ReplicaRole.SUPPLIER,
+                                      replicaId=REPLICAID_SUPPLIER_2)
 
     #
     # Create all the agreements
     #
-    # Creating agreement from master 1 to master 2
+    # Creating agreement from supplier 1 to supplier 2
     properties = {RA_NAME: r'meTo_$host:$port',
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                   RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
-    m1_m2_agmt = master1.agreement.create(suffix=SUFFIX, host=master2.host,
-                                          port=master2.port,
+    m1_m2_agmt = supplier1.agreement.create(suffix=SUFFIX, host=supplier2.host,
+                                          port=supplier2.port,
                                           properties=properties)
     if not m1_m2_agmt:
-        log.fatal("Fail to create a master -> master replica agreement")
+        log.fatal("Fail to create a supplier -> supplier replica agreement")
         sys.exit(1)
     log.debug("%s created" % m1_m2_agmt)
 
-    # Creating agreement from master 2 to master 1
+    # Creating agreement from supplier 2 to supplier 1
     properties = {RA_NAME: r'meTo_$host:$port',
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                   RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
-    m2_m1_agmt = master2.agreement.create(suffix=SUFFIX, host=master1.host,
-                                          port=master1.port,
+    m2_m1_agmt = supplier2.agreement.create(suffix=SUFFIX, host=supplier1.host,
+                                          port=supplier1.port,
                                           properties=properties)
     if not m2_m1_agmt:
-        log.fatal("Fail to create a master -> master replica agreement")
+        log.fatal("Fail to create a supplier -> supplier replica agreement")
         sys.exit(1)
     log.debug("%s created" % m2_m1_agmt)
 
@@ -123,9 +123,9 @@ def topology(request):
     time.sleep(5)
 
     #
-    # Import tests entries into master1 before we initialize master2
+    # Import tests entries into supplier1 before we initialize supplier2
     #
-    ldif_dir = master1.get_ldif_dir()
+    ldif_dir = supplier1.get_ldif_dir()
 
     import_ldif = ldif_dir + '/rel7.5-entries.ldif'
 
@@ -148,31 +148,31 @@ def topology(request):
     idx = 0
     while idx < NUM_USERS:
         count = str(idx)
-        ldif.write('dn: uid=master1_entry' + count + ',' +
+        ldif.write('dn: uid=supplier1_entry' + count + ',' +
                    DEFAULT_SUFFIX + '\n')
         ldif.write('objectclass: top\n')
         ldif.write('objectclass: person\n')
         ldif.write('objectclass: inetorgperson\n')
         ldif.write('objectclass: organizationalperson\n')
-        ldif.write('uid: master1_entry' + count + '\n')
-        ldif.write('cn: master1 entry' + count + '\n')
-        ldif.write('givenname: master1 ' + count + '\n')
+        ldif.write('uid: supplier1_entry' + count + '\n')
+        ldif.write('cn: supplier1 entry' + count + '\n')
+        ldif.write('givenname: supplier1 ' + count + '\n')
         ldif.write('sn: entry ' + count + '\n')
-        ldif.write('userpassword: master1_entry' + count + '\n')
+        ldif.write('userpassword: supplier1_entry' + count + '\n')
         ldif.write('description: ' + 'a' * random.randint(1, 1000) + '\n')
         ldif.write('\n')
 
-        ldif.write('dn: uid=master2_entry' + count + ',' +
+        ldif.write('dn: uid=supplier2_entry' + count + ',' +
                    DEFAULT_SUFFIX + '\n')
         ldif.write('objectclass: top\n')
         ldif.write('objectclass: person\n')
         ldif.write('objectclass: inetorgperson\n')
         ldif.write('objectclass: organizationalperson\n')
-        ldif.write('uid: master2_entry' + count + '\n')
-        ldif.write('cn: master2 entry' + count + '\n')
-        ldif.write('givenname: master2 ' + count + '\n')
+        ldif.write('uid: supplier2_entry' + count + '\n')
+        ldif.write('cn: supplier2 entry' + count + '\n')
+        ldif.write('givenname: supplier2 ' + count + '\n')
         ldif.write('sn: entry ' + count + '\n')
-        ldif.write('userpassword: master2_entry' + count + '\n')
+        ldif.write('userpassword: supplier2_entry' + count + '\n')
         ldif.write('description: ' + 'a' * random.randint(1, 1000) + '\n')
         ldif.write('\n')
         idx += 1
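
The hand-rolled ldif.write() loop above (note that `ldif` here is the file handle, not the module) can also be expressed with python-ldap's LDIFWriter, which handles line folding and escaping.  A sketch of the same entry shape, reusing import_ldif, NUM_USERS, and DEFAULT_SUFFIX from the module; python-ldap 3 expects bytes values:

    from ldif import LDIFWriter

    with open(import_ldif, 'w') as f:
        writer = LDIFWriter(f)
        for idx in range(NUM_USERS):
            dn = 'uid=supplier1_entry%d,%s' % (idx, DEFAULT_SUFFIX)
            writer.unparse(dn, {
                'objectclass': [b'top', b'person', b'inetorgperson',
                                b'organizationalperson'],
                'uid': [b'supplier1_entry%d' % idx],
                'cn': [b'supplier1 entry %d' % idx],
                'sn': [b'entry %d' % idx],
            })
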
@@ -181,7 +181,7 @@ def topology(request):
 
     # Now import it
     try:
-        master1.tasks.importLDIF(suffix=DEFAULT_SUFFIX, input_file=import_ldif,
+        supplier1.tasks.importLDIF(suffix=DEFAULT_SUFFIX, input_file=import_ldif,
                                  args={TASK_WAIT: True})
     except ValueError:
         log.fatal('test_reliab_7.5: Online import failed')
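
Current lib389 drives the same online import through a task object rather than instance.tasks; a sketch, assuming ImportTask.import_suffix_from_ldif from lib389.tasks:

    from lib389.tasks import ImportTask

    # Online import: the server reads the LDIF itself, so the file must be
    # readable by the ns-slapd process.
    task = ImportTask(supplier1)
    task.import_suffix_from_ldif(ldiffile=import_ldif, suffix=DEFAULT_SUFFIX)
    task.wait()
    assert task.get_exit_code() == 0
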
@@ -190,42 +190,42 @@ def topology(request):
     #
     # Initialize all the agreements
     #
-    master1.agreement.init(SUFFIX, HOST_MASTER_2, PORT_MASTER_2)
-    master1.waitForReplInit(m1_m2_agmt)
+    supplier1.agreement.init(SUFFIX, HOST_SUPPLIER_2, PORT_SUPPLIER_2)
+    supplier1.waitForReplInit(m1_m2_agmt)
 
     # Check replication is working...
-    if master1.testReplication(DEFAULT_SUFFIX, master2):
+    if supplier1.testReplication(DEFAULT_SUFFIX, supplier2):
         log.info('Replication is working.')
     else:
         log.fatal('Replication is not working.')
         assert False
 
     # Clear out the tmp dir
-    master1.clearTmpDir(__file__)
+    supplier1.clearTmpDir(__file__)
 
     # Delete each instance in the end
     def fin():
-        master1.delete()
-        master2.delete()
+        supplier1.delete()
+        supplier2.delete()
         if ENABLE_VALGRIND:
-            sbin_dir = get_sbin_dir(prefix=master1.prefix)
+            sbin_dir = get_sbin_dir(prefix=supplier1.prefix)
             valgrind_disable(sbin_dir)
     request.addfinalizer(fin)
 
-    return TopologyReplication(master1, master2)
+    return TopologyReplication(supplier1, supplier2)
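
All of this fixture plumbing predates lib389's packaged topologies; the equivalent two-supplier deployment is now a single fixture import.  A sketch using only APIs that appear elsewhere in this patch (topology_m2 and ReplicationManager.wait_for_replication):

    from lib389.topologies import topology_m2 as topo_m2
    from lib389.replica import ReplicationManager
    from lib389._constants import DEFAULT_SUFFIX

    def test_two_supplier_convergence(topo_m2):
        s1 = topo_m2.ms['supplier1']
        s2 = topo_m2.ms['supplier2']
        repl = ReplicationManager(DEFAULT_SUFFIX)
        # Check both directions: each supplier must see the other's changes.
        repl.wait_for_replication(s1, s2)
        repl.wait_for_replication(s2, s1)
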
 
 
 class AddDelUsers(threading.Thread):
-    def __init__(self, inst, masterid):
+    def __init__(self, inst, supplierid):
         threading.Thread.__init__(self)
         self.daemon = True
         self.inst = inst
-        self.id = masterid
+        self.id = supplierid
 
     def run(self):
         # Add 5000 entries
         idx = 0
-        RDN = 'uid=add_del_master_' + self.id + '-'
+        RDN = 'uid=add_del_supplier_' + self.id + '-'
 
         conn = DirectoryManager(self.inst).bind()
 
@@ -238,7 +238,7 @@ class AddDelUsers(threading.Thread):
                                             'cn': 'g' * random.randint(1, 500)
                                             })))
             except ldap.LDAPError as e:
-                log.fatal('Add users to master ' + self.id + ' failed (' +
+                log.fatal('Add users to supplier ' + self.id + ' failed (' +
                           USER_DN + ') error: ' + e.message['desc'])
             idx += 1
         conn.close()
@@ -251,7 +251,7 @@ class AddDelUsers(threading.Thread):
             try:
                 conn.delete_s(USER_DN)
             except ldap.LDAPError as e:
-                log.fatal('Failed to delete (' + USER_DN + ') on master ' +
+                log.fatal('Failed to delete (' + USER_DN + ') on supplier ' +
                           self.id + ': error ' + e.message['desc'])
             idx += 1
         conn.close()
@@ -259,25 +259,25 @@ class AddDelUsers(threading.Thread):
 
 class ModUsers(threading.Thread):
     # Do mods and modrdns
-    def __init__(self, inst, masterid):
+    def __init__(self, inst, supplierid):
         threading.Thread.__init__(self)
         self.daemon = True
         self.inst = inst
-        self.id = masterid
+        self.id = supplierid
 
     def run(self):
         # Mod existing entries
         conn = DirectoryManager(self.inst).bind()
         idx = 0
         while idx < NUM_USERS:
-            USER_DN = ('uid=master' + self.id + '_entry' + str(idx) + ',' +
+            USER_DN = ('uid=supplier' + self.id + '_entry' + str(idx) + ',' +
                        DEFAULT_SUFFIX)
             try:
                 conn.modify(USER_DN, [(ldap.MOD_REPLACE,
                                        'givenname',
-                                       'new givenname master1-' + str(idx))])
+                                       'new givenname supplier1-' + str(idx))])
             except ldap.LDAPError as e:
-                log.fatal('Failed to modify (' + USER_DN + ') on master ' +
+                log.fatal('Failed to modify (' + USER_DN + ') on supplier ' +
                           self.id + ': error ' + e.message['desc'])
             idx += 1
         conn.close()
@@ -286,13 +286,13 @@ class ModUsers(threading.Thread):
         conn = DirectoryManager(self.inst).bind()
         idx = 0
         while idx < NUM_USERS:
-            USER_DN = ('uid=master' + self.id + '_entry' + str(idx) + ',' +
+            USER_DN = ('uid=supplier' + self.id + '_entry' + str(idx) + ',' +
                        DEFAULT_SUFFIX)
-            NEW_RDN = 'cn=master' + self.id + '_entry' + str(idx)
+            NEW_RDN = 'cn=supplier' + self.id + '_entry' + str(idx)
             try:
                 conn.rename_s(USER_DN, NEW_RDN, delold=1)
             except ldap.LDAPError as e:
-                log.error('Failed to modrdn (' + USER_DN + ') on master ' +
+                log.error('Failed to modrdn (' + USER_DN + ') on supplier ' +
                           self.id + ': error ' + e.message['desc'])
             idx += 1
         conn.close()
@@ -301,33 +301,33 @@ class ModUsers(threading.Thread):
         conn = DirectoryManager(self.inst).bind()
         idx = 0
         while idx < NUM_USERS:
-            USER_DN = ('cn=master' + self.id + '_entry' + str(idx) + ',' +
+            USER_DN = ('cn=supplier' + self.id + '_entry' + str(idx) + ',' +
                        DEFAULT_SUFFIX)
-            NEW_RDN = 'uid=master' + self.id + '_entry' + str(idx)
+            NEW_RDN = 'uid=supplier' + self.id + '_entry' + str(idx)
             try:
                 conn.rename_s(USER_DN, NEW_RDN, delold=1)
             except ldap.LDAPError as e:
-                log.error('Failed to modrdn (' + USER_DN + ') on master ' +
+                log.error('Failed to modrdn (' + USER_DN + ') on supplier ' +
                           self.id + ': error ' + e.message['desc'])
             idx += 1
         conn.close()
 
 
 class DoSearches(threading.Thread):
-    # Search a master
-    def __init__(self, inst, masterid):
+    # Search a supplier
+    def __init__(self, inst, supplierid):
         threading.Thread.__init__(self)
         self.daemon = True
         self.inst = inst
-        self.id = masterid
+        self.id = supplierid
 
     def run(self):
         # Equality
         conn = DirectoryManager(self.inst).bind()
         idx = 0
         while idx < NUM_USERS:
-            search_filter = ('(|(uid=master' + self.id + '_entry' + str(idx) +
-                             ')(cn=master' + self.id + '_entry' + str(idx) +
+            search_filter = ('(|(uid=supplier' + self.id + '_entry' + str(idx) +
+                             ')(cn=supplier' + self.id + '_entry' + str(idx) +
                              '))')
             try:
                 conn.search(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, search_filter)
@@ -344,8 +344,8 @@ class DoSearches(threading.Thread):
         conn = DirectoryManager(self.inst).bind()
         idx = 0
         while idx < NUM_USERS:
-            search_filter = ('(|(uid=master' + self.id + '_entry' + str(idx) +
-                             '*)(cn=master' + self.id + '_entry' + str(idx) +
+            search_filter = ('(|(uid=supplier' + self.id + '_entry' + str(idx) +
+                             '*)(cn=supplier' + self.id + '_entry' + str(idx) +
                              '*))')
             try:
                 conn.search(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, search_filter)
@@ -360,7 +360,7 @@ class DoSearches(threading.Thread):
 
 
 class DoFullSearches(threading.Thread):
-    # Search a master
+    # Search a supplier
     def __init__(self, inst):
         threading.Thread.__init__(self)
         self.daemon = True
@@ -393,9 +393,9 @@ def test_reliab7_5_init(topology):
 
     BACKEND_DN = 'cn=userroot,cn=ldbm database,cn=plugins,cn=config'
 
-    # Update master 1
+    # Update supplier 1
     try:
-        topology.master1.modify_s(BACKEND_DN, [(ldap.MOD_REPLACE,
+        topology.supplier1.modify_s(BACKEND_DN, [(ldap.MOD_REPLACE,
                                                 'nsslapd-cachememsize',
                                                 '512000'),
                                                (ldap.MOD_REPLACE,
@@ -405,9 +405,9 @@ def test_reliab7_5_init(topology):
         log.fatal('Failed to set cache settings: error ' + e.message['desc'])
         assert False
 
-    # Update master 2
+    # Update supplier 2
     try:
-        topology.master2.modify_s(BACKEND_DN, [(ldap.MOD_REPLACE,
+        topology.supplier2.modify_s(BACKEND_DN, [(ldap.MOD_REPLACE,
                                                 'nsslapd-cachememsize',
                                                 '512000'),
                                                (ldap.MOD_REPLACE,
@@ -417,17 +417,17 @@ def test_reliab7_5_init(topology):
         log.fatal('Failed to set cache settings: error ' + e.message['desc'])
         assert False
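
The raw modify_s calls have a lib389 object-layer equivalent; a sketch assuming the Backends collection (the second replaced attribute is elided in the hunk above, so only the visible one is shown):

    from lib389.backend import Backends

    # Same cache tuning through the mapped-object API instead of raw LDAP.
    be = Backends(topology.supplier1).get('userRoot')
    be.replace('nsslapd-cachememsize', '512000')
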
 
-    # Restart the masters to pick up the new cache settings
-    topology.master1.stop(timeout=10)
-    topology.master2.stop(timeout=10)
+    # Restart the suppliers to pick up the new cache settings
+    topology.supplier1.stop(timeout=10)
+    topology.supplier2.stop(timeout=10)
 
     # This is the time to enable valgrind (if enabled)
     if ENABLE_VALGRIND:
-        sbin_dir = get_sbin_dir(prefix=topology.master1.prefix)
+        sbin_dir = get_sbin_dir(prefix=topology.supplier1.prefix)
         valgrind_enable(sbin_dir)
 
-    topology.master1.start(timeout=30)
-    topology.master2.start(timeout=30)
+    topology.supplier1.start(timeout=30)
+    topology.supplier2.start(timeout=30)
 
 
 def test_reliab7_5_run(topology):
@@ -439,34 +439,34 @@ def test_reliab7_5_run(topology):
     RUNNING = True
 
     # Start some searches to run through the entire stress test
-    fullSearch1 = DoFullSearches(topology.master1)
+    fullSearch1 = DoFullSearches(topology.supplier1)
     fullSearch1.start()
-    fullSearch2 = DoFullSearches(topology.master2)
+    fullSearch2 = DoFullSearches(topology.supplier2)
     fullSearch2.start()
 
     while count <= MAX_PASSES:
         log.info('################## Reliability 7.5 Pass: %d' % count)
 
-        # Master 1
-        add_del_users1 = AddDelUsers(topology.master1, '1')
+        # Supplier 1
+        add_del_users1 = AddDelUsers(topology.supplier1, '1')
         add_del_users1.start()
-        mod_users1 = ModUsers(topology.master1, '1')
+        mod_users1 = ModUsers(topology.supplier1, '1')
         mod_users1.start()
-        search1 = DoSearches(topology.master1, '1')
+        search1 = DoSearches(topology.supplier1, '1')
         search1.start()
 
-        # Master 2
-        add_del_users2 = AddDelUsers(topology.master2, '2')
+        # Supplier 2
+        add_del_users2 = AddDelUsers(topology.supplier2, '2')
         add_del_users2.start()
-        mod_users2 = ModUsers(topology.master2, '2')
+        mod_users2 = ModUsers(topology.supplier2, '2')
         mod_users2.start()
-        search2 = DoSearches(topology.master2, '2')
+        search2 = DoSearches(topology.supplier2, '2')
         search2.start()
 
-        # Search the masters
-        search3 = DoSearches(topology.master1, '1')
+        # Search the suppliers
+        search3 = DoSearches(topology.supplier1, '1')
         search3.start()
-        search4 = DoSearches(topology.master2, '2')
+        search4 = DoSearches(topology.supplier2, '2')
         search4.start()
 
         # Wait for threads to finish
@@ -491,77 +491,77 @@ def test_reliab7_5_run(topology):
     # Wait for replication to converge
     #
     if CHECK_CONVERGENCE:
-        # Add an entry to each master, and wait for it to replicate
-        MASTER1_DN = 'uid=rel7.5-master1,' + DEFAULT_SUFFIX
-        MASTER2_DN = 'uid=rel7.5-master2,' + DEFAULT_SUFFIX
+        # Add an entry to each supplier, and wait for it to replicate
+        SUPPLIER1_DN = 'uid=rel7.5-supplier1,' + DEFAULT_SUFFIX
+        SUPPLIER2_DN = 'uid=rel7.5-supplier2,' + DEFAULT_SUFFIX
 
-        # Master 1
+        # Supplier 1
         try:
-            topology.master1.add_s(Entry((MASTER1_DN, {'objectclass':
+            topology.supplier1.add_s(Entry((SUPPLIER1_DN, {'objectclass':
                                                        ['top',
                                                         'extensibleObject'],
                                                        'sn': '1',
                                                        'cn': 'user 1',
-                                                       'uid': 'rel7.5-master1',
+                                                       'uid': 'rel7.5-supplier1',
                                                        'userpassword':
                                                        PASSWORD})))
         except ldap.LDAPError as e:
-            log.fatal('Failed to add replication test entry ' + MASTER1_DN +
+            log.fatal('Failed to add replication test entry ' + SUPPLIER1_DN +
                       ': error ' + e.message['desc'])
             assert False
 
-        log.info('################## Waiting for master 2 to converge...')
+        log.info('################## Waiting for supplier 2 to converge...')
 
         while True:
             entry = None
             try:
-                entry = topology.master2.search_s(MASTER1_DN,
+                entry = topology.supplier2.search_s(SUPPLIER1_DN,
                                                   ldap.SCOPE_BASE,
                                                   'objectclass=*')
             except ldap.NO_SUCH_OBJECT:
                 pass
             except ldap.LDAPError as e:
                 log.fatal('Search Users: Search failed (%s): %s' %
-                          (MASTER1_DN, e.message['desc']))
+                          (SUPPLIER1_DN, e.message['desc']))
                 assert False
             if entry:
                 break
             time.sleep(5)
 
-        log.info('################## Master 2 converged.')
+        log.info('################## Supplier 2 converged.')
 
-        # Master 2
+        # Supplier 2
         try:
-            topology.master2.add_s(
-                Entry((MASTER2_DN, {'objectclass': ['top',
+            topology.supplier2.add_s(
+                Entry((SUPPLIER2_DN, {'objectclass': ['top',
                                                     'extensibleObject'],
                                     'sn': '1',
                                     'cn': 'user 1',
-                                    'uid': 'rel7.5-master2',
+                                    'uid': 'rel7.5-supplier2',
                                     'userpassword': PASSWORD})))
         except ldap.LDAPError as e:
-            log.fatal('Failed to add replication test entry ' + MASTER1_DN +
+            log.fatal('Failed to add replication test entry ' + SUPPLIER1_DN +
                       ': error ' + e.message['desc'])
             assert False
 
-        log.info('################## Waiting for master 1 to converge...')
+        log.info('################## Waiting for supplier 1 to converge...')
         while True:
             entry = None
             try:
-                entry = topology.master1.search_s(MASTER2_DN,
+                entry = topology.supplier1.search_s(SUPPLIER2_DN,
                                                   ldap.SCOPE_BASE,
                                                   'objectclass=*')
             except ldap.NO_SUCH_OBJECT:
                 pass
             except ldap.LDAPError as e:
                 log.fatal('Search Users: Search failed (%s): %s' %
-                          (MASTER2_DN, e.message['desc']))
+                          (SUPPLIER2_DN, e.message['desc']))
                 assert False
             if entry:
                 break
             time.sleep(5)
 
-        log.info('################## Master 1 converged.')
+        log.info('################## Supplier 1 converged.')
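
Both hand-rolled polling loops re-implement what ReplicationManager.wait_for_replication (used in the automember test earlier in this patch) already provides.  A sketch of the equivalent convergence check:

    from lib389.replica import ReplicationManager
    from lib389._constants import DEFAULT_SUFFIX

    repl = ReplicationManager(DEFAULT_SUFFIX)
    # Roughly: writes a marker on the source and blocks until it appears on
    # the target, raising on timeout instead of looping on NO_SUCH_OBJECT.
    repl.wait_for_replication(topology.supplier1, topology.supplier2)
    repl.wait_for_replication(topology.supplier2, topology.supplier1)
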
 
     # Stop the full searches
     RUNNING = False

+ 208 - 208
dirsrvtests/tests/stress/replication/mmr_01_4m-2h-4c_test.py

@@ -22,31 +22,31 @@ ADD_DEL_COUNT = 5000
 MAX_LOOPS = 5
 TEST_CONVERGE_LATENCY = True
 CONVERGENCE_TIMEOUT = '60'
-master_list = []
+supplier_list = []
 hub_list = []
 con_list = []
 TEST_START = time.time()
 
 LAST_DN_IDX = ADD_DEL_COUNT - 1
-LAST_DN_M1 = 'DEL dn="uid=master_1-%d,%s' % (LAST_DN_IDX, DEFAULT_SUFFIX)
-LAST_DN_M2 = 'DEL dn="uid=master_2-%d,%s' % (LAST_DN_IDX, DEFAULT_SUFFIX)
-LAST_DN_M3 = 'DEL dn="uid=master_3-%d,%s' % (LAST_DN_IDX, DEFAULT_SUFFIX)
-LAST_DN_M4 = 'DEL dn="uid=master_4-%d,%s' % (LAST_DN_IDX, DEFAULT_SUFFIX)
+LAST_DN_M1 = 'DEL dn="uid=supplier_1-%d,%s' % (LAST_DN_IDX, DEFAULT_SUFFIX)
+LAST_DN_M2 = 'DEL dn="uid=supplier_2-%d,%s' % (LAST_DN_IDX, DEFAULT_SUFFIX)
+LAST_DN_M3 = 'DEL dn="uid=supplier_3-%d,%s' % (LAST_DN_IDX, DEFAULT_SUFFIX)
+LAST_DN_M4 = 'DEL dn="uid=supplier_4-%d,%s' % (LAST_DN_IDX, DEFAULT_SUFFIX)
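
The LAST_DN_* strings read like access-log needles for timing convergence: once the final DEL of a pass shows up on a replica, that replica has caught up.  A hedged sketch of such a check, assuming lib389's ds_access_log matcher and the module's TEST_START:

    import time

    def wait_for_last_del(consumer, needle, timeout=300):
        # Poll the consumer access log until the final DEL lands (assumed usage).
        deadline = time.time() + timeout
        while time.time() < deadline:
            if consumer.ds_access_log.match('.*%s.*' % needle):
                return time.time() - TEST_START  # crude convergence latency
            time.sleep(5)
        raise AssertionError('%s never reached %s' % (needle, consumer.serverid))
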
 
 
 class TopologyReplication(object):
     """The Replication Topology Class"""
-    def __init__(self, master1, master2, master3, master4, hub1, hub2,
+    def __init__(self, supplier1, supplier2, supplier3, supplier4, hub1, hub2,
                  consumer1, consumer2, consumer3, consumer4):
         """Init"""
-        master1.open()
-        self.master1 = master1
-        master2.open()
-        self.master2 = master2
-        master3.open()
-        self.master3 = master3
-        master4.open()
-        self.master4 = master4
+        supplier1.open()
+        self.supplier1 = supplier1
+        supplier2.open()
+        self.supplier2 = supplier2
+        supplier3.open()
+        self.supplier3 = supplier3
+        supplier4.open()
+        self.supplier4 = supplier4
         hub1.open()
         self.hub1 = hub1
         hub2.open()
@@ -59,10 +59,10 @@ class TopologyReplication(object):
         self.consumer3 = consumer3
         consumer4.open()
         self.consumer4 = consumer4
-        master_list.append(master1.serverid)
-        master_list.append(master2.serverid)
-        master_list.append(master3.serverid)
-        master_list.append(master4.serverid)
+        supplier_list.append(supplier1.serverid)
+        supplier_list.append(supplier2.serverid)
+        supplier_list.append(supplier3.serverid)
+        supplier_list.append(supplier4.serverid)
         hub_list.append(hub1.serverid)
         hub_list.append(hub2.serverid)
         con_list.append(consumer1.serverid)
@@ -75,81 +75,81 @@ class TopologyReplication(object):
 def topology(request):
     """Create Replication Deployment"""
 
-    # Creating master 1...
+    # Creating supplier 1...
     if DEBUGGING:
-        master1 = DirSrv(verbose=True)
+        supplier1 = DirSrv(verbose=True)
     else:
-        master1 = DirSrv(verbose=False)
-    args_instance[SER_HOST] = HOST_MASTER_1
-    args_instance[SER_PORT] = PORT_MASTER_1
-    args_instance[SER_SERVERID_PROP] = SERVERID_MASTER_1
+        supplier1 = DirSrv(verbose=False)
+    args_instance[SER_HOST] = HOST_SUPPLIER_1
+    args_instance[SER_PORT] = PORT_SUPPLIER_1
+    args_instance[SER_SERVERID_PROP] = SERVERID_SUPPLIER_1
     args_instance[SER_CREATION_SUFFIX] = DEFAULT_SUFFIX
-    args_master = args_instance.copy()
-    master1.allocate(args_master)
-    instance_master1 = master1.exists()
-    if instance_master1:
-        master1.delete()
-    master1.create()
-    master1.open()
-    master1.replica.enableReplication(suffix=SUFFIX, role=ReplicaRole.MASTER,
-                                      replicaId=REPLICAID_MASTER_1)
-
-    # Creating master 2...
+    args_supplier = args_instance.copy()
+    supplier1.allocate(args_supplier)
+    instance_supplier1 = supplier1.exists()
+    if instance_supplier1:
+        supplier1.delete()
+    supplier1.create()
+    supplier1.open()
+    supplier1.replica.enableReplication(suffix=SUFFIX, role=ReplicaRole.SUPPLIER,
+                                      replicaId=REPLICAID_SUPPLIER_1)
+
+    # Creating supplier 2...
     if DEBUGGING:
-        master2 = DirSrv(verbose=True)
+        supplier2 = DirSrv(verbose=True)
     else:
-        master2 = DirSrv(verbose=False)
-    args_instance[SER_HOST] = HOST_MASTER_2
-    args_instance[SER_PORT] = PORT_MASTER_2
-    args_instance[SER_SERVERID_PROP] = SERVERID_MASTER_2
+        supplier2 = DirSrv(verbose=False)
+    args_instance[SER_HOST] = HOST_SUPPLIER_2
+    args_instance[SER_PORT] = PORT_SUPPLIER_2
+    args_instance[SER_SERVERID_PROP] = SERVERID_SUPPLIER_2
     args_instance[SER_CREATION_SUFFIX] = DEFAULT_SUFFIX
-    args_master = args_instance.copy()
-    master2.allocate(args_master)
-    instance_master2 = master2.exists()
-    if instance_master2:
-        master2.delete()
-    master2.create()
-    master2.open()
-    master2.replica.enableReplication(suffix=SUFFIX, role=ReplicaRole.MASTER,
-                                      replicaId=REPLICAID_MASTER_2)
-
-    # Creating master 3...
+    args_supplier = args_instance.copy()
+    supplier2.allocate(args_supplier)
+    instance_supplier2 = supplier2.exists()
+    if instance_supplier2:
+        supplier2.delete()
+    supplier2.create()
+    supplier2.open()
+    supplier2.replica.enableReplication(suffix=SUFFIX, role=ReplicaRole.SUPPLIER,
+                                      replicaId=REPLICAID_SUPPLIER_2)
+
+    # Creating supplier 3...
     if DEBUGGING:
-        master3 = DirSrv(verbose=True)
+        supplier3 = DirSrv(verbose=True)
     else:
-        master3 = DirSrv(verbose=False)
-    args_instance[SER_HOST] = HOST_MASTER_3
-    args_instance[SER_PORT] = PORT_MASTER_3
-    args_instance[SER_SERVERID_PROP] = SERVERID_MASTER_3
+        supplier3 = DirSrv(verbose=False)
+    args_instance[SER_HOST] = HOST_SUPPLIER_3
+    args_instance[SER_PORT] = PORT_SUPPLIER_3
+    args_instance[SER_SERVERID_PROP] = SERVERID_SUPPLIER_3
     args_instance[SER_CREATION_SUFFIX] = DEFAULT_SUFFIX
-    args_master = args_instance.copy()
-    master3.allocate(args_master)
-    instance_master3 = master3.exists()
-    if instance_master3:
-        master3.delete()
-    master3.create()
-    master3.open()
-    master3.replica.enableReplication(suffix=SUFFIX, role=ReplicaRole.MASTER,
-                                      replicaId=REPLICAID_MASTER_3)
-
-    # Creating master 4...
+    args_supplier = args_instance.copy()
+    supplier3.allocate(args_supplier)
+    instance_supplier3 = supplier3.exists()
+    if instance_supplier3:
+        supplier3.delete()
+    supplier3.create()
+    supplier3.open()
+    supplier3.replica.enableReplication(suffix=SUFFIX, role=ReplicaRole.SUPPLIER,
+                                      replicaId=REPLICAID_SUPPLIER_3)
+
+    # Creating supplier 4...
     if DEBUGGING:
-        master4 = DirSrv(verbose=True)
+        supplier4 = DirSrv(verbose=True)
     else:
-        master4 = DirSrv(verbose=False)
-    args_instance[SER_HOST] = HOST_MASTER_4
-    args_instance[SER_PORT] = PORT_MASTER_4
-    args_instance[SER_SERVERID_PROP] = SERVERID_MASTER_4
+        supplier4 = DirSrv(verbose=False)
+    args_instance[SER_HOST] = HOST_SUPPLIER_4
+    args_instance[SER_PORT] = PORT_SUPPLIER_4
+    args_instance[SER_SERVERID_PROP] = SERVERID_SUPPLIER_4
     args_instance[SER_CREATION_SUFFIX] = DEFAULT_SUFFIX
-    args_master = args_instance.copy()
-    master4.allocate(args_master)
-    instance_master4 = master4.exists()
-    if instance_master4:
-        master4.delete()
-    master4.create()
-    master4.open()
-    master4.replica.enableReplication(suffix=SUFFIX, role=ReplicaRole.MASTER,
-                                      replicaId=REPLICAID_MASTER_4)
+    args_supplier = args_instance.copy()
+    supplier4.allocate(args_supplier)
+    instance_supplier4 = supplier4.exists()
+    if instance_supplier4:
+        supplier4.delete()
+    supplier4.create()
+    supplier4.open()
+    supplier4.replica.enableReplication(suffix=SUFFIX, role=ReplicaRole.SUPPLIER,
+                                      replicaId=REPLICAID_SUPPLIER_4)
 
     # Creating hub 1...
     if DEBUGGING:
@@ -273,283 +273,283 @@ def topology(request):
     # Create all the agreements
     #
 
-    # Creating agreement from master 1 to master 2
-    properties = {RA_NAME: 'meTo_' + master2.host + ':' + str(master2.port),
+    # Creating agreement from supplier 1 to supplier 2
+    properties = {RA_NAME: 'meTo_' + supplier2.host + ':' + str(supplier2.port),
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                   RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
-    m1_m2_agmt = master1.agreement.create(suffix=SUFFIX, host=master2.host,
-                                          port=master2.port,
+    m1_m2_agmt = supplier1.agreement.create(suffix=SUFFIX, host=supplier2.host,
+                                          port=supplier2.port,
                                           properties=properties)
     if not m1_m2_agmt:
-        log.fatal("Fail to create a master -> master replica agreement")
+        log.fatal("Fail to create a supplier -> supplier replica agreement")
         sys.exit(1)
     log.debug("%s created" % m1_m2_agmt)
 
-    # Creating agreement from master 1 to master 3
-    properties = {RA_NAME: 'meTo_' + master3.host + ':' + str(master3.port),
+    # Creating agreement from supplier 1 to supplier 3
+    properties = {RA_NAME: 'meTo_' + supplier3.host + ':' + str(supplier3.port),
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                   RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
-    m1_m3_agmt = master1.agreement.create(suffix=SUFFIX, host=master3.host,
-                                          port=master3.port,
+    m1_m3_agmt = supplier1.agreement.create(suffix=SUFFIX, host=supplier3.host,
+                                          port=supplier3.port,
                                           properties=properties)
     if not m1_m3_agmt:
-        log.fatal("Fail to create a master -> master replica agreement")
+        log.fatal("Fail to create a supplier -> supplier replica agreement")
         sys.exit(1)
     log.debug("%s created" % m1_m3_agmt)
 
-    # Creating agreement from master 1 to master 4
-    properties = {RA_NAME: 'meTo_' + master4.host + ':' + str(master4.port),
+    # Creating agreement from supplier 1 to supplier 4
+    properties = {RA_NAME: 'meTo_' + supplier4.host + ':' + str(supplier4.port),
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                   RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
-    m1_m4_agmt = master1.agreement.create(suffix=SUFFIX, host=master4.host,
-                                          port=master4.port,
+    m1_m4_agmt = supplier1.agreement.create(suffix=SUFFIX, host=supplier4.host,
+                                          port=supplier4.port,
                                           properties=properties)
     if not m1_m4_agmt:
-        log.fatal("Fail to create a master -> master replica agreement")
+        log.fatal("Fail to create a supplier -> supplier replica agreement")
         sys.exit(1)
     log.debug("%s created" % m1_m4_agmt)
 
-    # Creating agreement from master 1 to hub 1
+    # Creating agreement from supplier 1 to hub 1
     properties = {RA_NAME: 'meTo_' + hub1.host + ':' + str(hub1.port),
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                   RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
-    m1_h1_agmt = master1.agreement.create(suffix=SUFFIX, host=hub1.host,
+    m1_h1_agmt = supplier1.agreement.create(suffix=SUFFIX, host=hub1.host,
                                           port=hub1.port,
                                           properties=properties)
     if not m1_h1_agmt:
-        log.fatal("Fail to create a master -> hub replica agreement")
+        log.fatal("Fail to create a supplier -> hub replica agreement")
         sys.exit(1)
     log.debug("%s created" % m1_h1_agmt)
 
-    # Creating agreement from master 1 to hub 2
+    # Creating agreement from supplier 1 to hub 2
     properties = {RA_NAME: 'meTo_' + hub2.host + ':' + str(hub2.port),
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                   RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
-    m1_h2_agmt = master1.agreement.create(suffix=SUFFIX, host=hub2.host,
+    m1_h2_agmt = supplier1.agreement.create(suffix=SUFFIX, host=hub2.host,
                                           port=hub2.port,
                                           properties=properties)
     if not m1_h2_agmt:
-        log.fatal("Fail to create a master -> hub replica agreement")
+        log.fatal("Fail to create a supplier -> hub replica agreement")
         sys.exit(1)
     log.debug("%s created" % m1_h2_agmt)
 
-    # Creating agreement from master 2 to master 1
-    properties = {RA_NAME: 'meTo_' + master1.host + ':' + str(master1.port),
+    # Creating agreement from supplier 2 to supplier 1
+    properties = {RA_NAME: 'meTo_' + supplier1.host + ':' + str(supplier1.port),
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                   RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
-    m2_m1_agmt = master2.agreement.create(suffix=SUFFIX, host=master1.host,
-                                          port=master1.port,
+    m2_m1_agmt = supplier2.agreement.create(suffix=SUFFIX, host=supplier1.host,
+                                          port=supplier1.port,
                                           properties=properties)
     if not m2_m1_agmt:
-        log.fatal("Fail to create a master -> master replica agreement")
+        log.fatal("Fail to create a supplier -> supplier replica agreement")
         sys.exit(1)
     log.debug("%s created" % m2_m1_agmt)
 
-    # Creating agreement from master 2 to master 3
-    properties = {RA_NAME: 'meTo_' + master3.host + ':' + str(master3.port),
+    # Creating agreement from supplier 2 to supplier 3
+    properties = {RA_NAME: 'meTo_' + supplier3.host + ':' + str(supplier3.port),
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                   RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
-    m2_m3_agmt = master2.agreement.create(suffix=SUFFIX, host=master3.host,
-                                          port=master3.port,
+    m2_m3_agmt = supplier2.agreement.create(suffix=SUFFIX, host=supplier3.host,
+                                          port=supplier3.port,
                                           properties=properties)
     if not m2_m3_agmt:
-        log.fatal("Fail to create a master -> master replica agreement")
+        log.fatal("Fail to create a supplier -> supplier replica agreement")
         sys.exit(1)
     log.debug("%s created" % m2_m3_agmt)
 
-    # Creating agreement from master 2 to master 4
-    properties = {RA_NAME: 'meTo_' + master4.host + ':' + str(master4.port),
+    # Creating agreement from supplier 2 to supplier 4
+    properties = {RA_NAME: 'meTo_' + supplier4.host + ':' + str(supplier4.port),
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                   RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
-    m2_m4_agmt = master2.agreement.create(suffix=SUFFIX, host=master4.host,
-                                          port=master4.port,
+    m2_m4_agmt = supplier2.agreement.create(suffix=SUFFIX, host=supplier4.host,
+                                          port=supplier4.port,
                                           properties=properties)
     if not m2_m4_agmt:
-        log.fatal("Fail to create a master -> master replica agreement")
+        log.fatal("Fail to create a supplier -> supplier replica agreement")
         sys.exit(1)
     log.debug("%s created" % m2_m4_agmt)
 
-    # Creating agreement from master 2 to hub 1
+    # Creating agreement from supplier 2 to hub 1
     properties = {RA_NAME: 'meTo_' + hub1.host + ':' + str(hub1.port),
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                   RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
-    m2_h1_agmt = master2.agreement.create(suffix=SUFFIX, host=hub1.host,
+    m2_h1_agmt = supplier2.agreement.create(suffix=SUFFIX, host=hub1.host,
                                           port=hub1.port,
                                           properties=properties)
     if not m2_h1_agmt:
-        log.fatal("Fail to create a master -> hub replica agreement")
+        log.fatal("Fail to create a supplier -> hub replica agreement")
         sys.exit(1)
     log.debug("%s created" % m2_h1_agmt)
 
-    # Creating agreement from master 2 to hub 2
+    # Creating agreement from supplier 2 to hub 2
     properties = {RA_NAME: 'meTo_' + hub2.host + ':' + str(hub2.port),
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                   RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
-    m2_h2_agmt = master2.agreement.create(suffix=SUFFIX, host=hub2.host,
+    m2_h2_agmt = supplier2.agreement.create(suffix=SUFFIX, host=hub2.host,
                                           port=hub2.port,
                                           properties=properties)
     if not m2_h2_agmt:
-        log.fatal("Fail to create a master -> hub replica agreement")
+        log.fatal("Fail to create a supplier -> hub replica agreement")
         sys.exit(1)
     log.debug("%s created" % m2_h2_agmt)
 
-    # Creating agreement from master 3 to master 1
-    properties = {RA_NAME: 'meTo_' + master1.host + ':' + str(master1.port),
+    # Creating agreement from supplier 3 to supplier 1
+    properties = {RA_NAME: 'meTo_' + supplier1.host + ':' + str(supplier1.port),
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                   RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
-    m3_m1_agmt = master3.agreement.create(suffix=SUFFIX, host=master1.host,
-                                          port=master1.port,
+    m3_m1_agmt = supplier3.agreement.create(suffix=SUFFIX, host=supplier1.host,
+                                          port=supplier1.port,
                                           properties=properties)
     if not m3_m1_agmt:
-        log.fatal("Fail to create a master -> master replica agreement")
+        log.fatal("Fail to create a supplier -> supplier replica agreement")
         sys.exit(1)
     log.debug("%s created" % m3_m1_agmt)
 
-    # Creating agreement from master 3 to master 2
-    properties = {RA_NAME: 'meTo_' + master2.host + ':' + str(master2.port),
+    # Creating agreement from supplier 3 to supplier 2
+    properties = {RA_NAME: 'meTo_' + supplier2.host + ':' + str(supplier2.port),
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                   RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
-    m3_m2_agmt = master3.agreement.create(suffix=SUFFIX, host=master2.host,
-                                          port=master2.port,
+    m3_m2_agmt = supplier3.agreement.create(suffix=SUFFIX, host=supplier2.host,
+                                          port=supplier2.port,
                                           properties=properties)
     if not m3_m2_agmt:
-        log.fatal("Fail to create a master -> master replica agreement")
+        log.fatal("Fail to create a supplier -> supplier replica agreement")
         sys.exit(1)
     log.debug("%s created" % m3_m2_agmt)
 
-    # Creating agreement from master 3 to master 4
-    properties = {RA_NAME: 'meTo_' + master4.host + ':' + str(master4.port),
+    # Creating agreement from supplier 3 to supplier 4
+    properties = {RA_NAME: 'meTo_' + supplier4.host + ':' + str(supplier4.port),
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                   RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
-    m3_m4_agmt = master3.agreement.create(suffix=SUFFIX, host=master4.host,
-                                          port=master4.port,
+    m3_m4_agmt = supplier3.agreement.create(suffix=SUFFIX, host=supplier4.host,
+                                          port=supplier4.port,
                                           properties=properties)
     if not m3_m4_agmt:
-        log.fatal("Fail to create a master -> master replica agreement")
+        log.fatal("Fail to create a supplier -> supplier replica agreement")
         sys.exit(1)
     log.debug("%s created" % m3_m4_agmt)
 
-    # Creating agreement from master 3 to hub 1
+    # Creating agreement from supplier 3 to hub 1
     properties = {RA_NAME: 'meTo_' + hub1.host + ':' + str(hub1.port),
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                   RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
-    m3_h1_agmt = master3.agreement.create(suffix=SUFFIX, host=hub1.host,
+    m3_h1_agmt = supplier3.agreement.create(suffix=SUFFIX, host=hub1.host,
                                           port=hub1.port,
                                           properties=properties)
     if not m3_h1_agmt:
-        log.fatal("Fail to create a master -> hub replica agreement")
+        log.fatal("Fail to create a supplier -> hub replica agreement")
         sys.exit(1)
     log.debug("%s created" % m3_h1_agmt)
 
-    # Creating agreement from master 3 to hub 2
+    # Creating agreement from supplier 3 to hub 2
     properties = {RA_NAME: 'meTo_' + hub2.host + ':' + str(hub2.port),
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                   RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
-    m3_h2_agmt = master3.agreement.create(suffix=SUFFIX, host=hub2.host,
+    m3_h2_agmt = supplier3.agreement.create(suffix=SUFFIX, host=hub2.host,
                                           port=hub2.port,
                                           properties=properties)
     if not m3_h2_agmt:
-        log.fatal("Fail to create a master -> hub replica agreement")
+        log.fatal("Fail to create a supplier -> hub replica agreement")
         sys.exit(1)
     log.debug("%s created" % m3_h2_agmt)
 
-    # Creating agreement from master 4 to master 1
-    properties = {RA_NAME: 'meTo_' + master1.host + ':' + str(master1.port),
+    # Creating agreement from supplier 4 to supplier 1
+    properties = {RA_NAME: 'meTo_' + supplier1.host + ':' + str(supplier1.port),
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                   RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
-    m4_m1_agmt = master4.agreement.create(suffix=SUFFIX, host=master1.host,
-                                          port=master1.port,
+    m4_m1_agmt = supplier4.agreement.create(suffix=SUFFIX, host=supplier1.host,
+                                          port=supplier1.port,
                                           properties=properties)
     if not m4_m1_agmt:
-        log.fatal("Fail to create a master -> master replica agreement")
+        log.fatal("Fail to create a supplier -> supplier replica agreement")
         sys.exit(1)
     log.debug("%s created" % m4_m1_agmt)
 
-    # Creating agreement from master 4 to master 2
-    properties = {RA_NAME: 'meTo_' + master2.host + ':' + str(master2.port),
+    # Creating agreement from supplier 4 to supplier 2
+    properties = {RA_NAME: 'meTo_' + supplier2.host + ':' + str(supplier2.port),
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                   RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
-    m4_m2_agmt = master4.agreement.create(suffix=SUFFIX, host=master2.host,
-                                          port=master2.port,
+    m4_m2_agmt = supplier4.agreement.create(suffix=SUFFIX, host=supplier2.host,
+                                          port=supplier2.port,
                                           properties=properties)
     if not m4_m2_agmt:
-        log.fatal("Fail to create a master -> master replica agreement")
+        log.fatal("Fail to create a supplier -> supplier replica agreement")
         sys.exit(1)
     log.debug("%s created" % m4_m2_agmt)
 
-    # Creating agreement from master 4 to master 3
-    properties = {RA_NAME: 'meTo_' + master3.host + ':' + str(master3.port),
+    # Creating agreement from supplier 4 to supplier 3
+    properties = {RA_NAME: 'meTo_' + supplier3.host + ':' + str(supplier3.port),
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                   RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
-    m4_m3_agmt = master4.agreement.create(suffix=SUFFIX, host=master3.host,
-                                          port=master3.port,
+    m4_m3_agmt = supplier4.agreement.create(suffix=SUFFIX, host=supplier3.host,
+                                          port=supplier3.port,
                                           properties=properties)
     if not m4_m3_agmt:
-        log.fatal("Fail to create a master -> master replica agreement")
+        log.fatal("Fail to create a supplier -> supplier replica agreement")
         sys.exit(1)
     log.debug("%s created" % m4_m3_agmt)
 
-    # Creating agreement from master 4 to hub 1
+    # Creating agreement from supplier 4 to hub 1
     properties = {RA_NAME: 'meTo_' + hub1.host + ':' + str(hub1.port),
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                   RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
-    m4_h1_agmt = master4.agreement.create(suffix=SUFFIX, host=hub1.host,
+    m4_h1_agmt = supplier4.agreement.create(suffix=SUFFIX, host=hub1.host,
                                           port=hub1.port,
                                           properties=properties)
     if not m4_h1_agmt:
-        log.fatal("Fail to create a master -> hub replica agreement")
+        log.fatal("Fail to create a supplier -> hub replica agreement")
         sys.exit(1)
     log.debug("%s created" % m4_h1_agmt)
 
-    # Creating agreement from master 4 to hub 2
+    # Creating agreement from supplier 4 to hub 2
     properties = {RA_NAME: 'meTo_' + hub2.host + ':' + str(hub2.port),
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                   RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
-    m4_h2_agmt = master4.agreement.create(suffix=SUFFIX, host=hub2.host,
+    m4_h2_agmt = supplier4.agreement.create(suffix=SUFFIX, host=hub2.host,
                                           port=hub2.port,
                                           properties=properties)
     if not m4_h2_agmt:
-        log.fatal("Fail to create a master -> hub replica agreement")
+        log.fatal("Fail to create a supplier -> hub replica agreement")
         sys.exit(1)
     log.debug("%s created" % m4_h2_agmt)
 
@@ -671,14 +671,14 @@ def topology(request):
     #
     # Initialize all the agreements
     #
-    master1.agreement.init(SUFFIX, HOST_MASTER_2, PORT_MASTER_2)
-    master1.waitForReplInit(m1_m2_agmt)
-    master1.agreement.init(SUFFIX, HOST_MASTER_3, PORT_MASTER_3)
-    master1.waitForReplInit(m1_m3_agmt)
-    master1.agreement.init(SUFFIX, HOST_MASTER_4, PORT_MASTER_4)
-    master1.waitForReplInit(m1_m4_agmt)
-    master1.agreement.init(SUFFIX, HOST_HUB_1, PORT_HUB_1)
-    master1.waitForReplInit(m1_h1_agmt)
+    supplier1.agreement.init(SUFFIX, HOST_SUPPLIER_2, PORT_SUPPLIER_2)
+    supplier1.waitForReplInit(m1_m2_agmt)
+    supplier1.agreement.init(SUFFIX, HOST_SUPPLIER_3, PORT_SUPPLIER_3)
+    supplier1.waitForReplInit(m1_m3_agmt)
+    supplier1.agreement.init(SUFFIX, HOST_SUPPLIER_4, PORT_SUPPLIER_4)
+    supplier1.waitForReplInit(m1_m4_agmt)
+    supplier1.agreement.init(SUFFIX, HOST_HUB_1, PORT_HUB_1)
+    supplier1.waitForReplInit(m1_h1_agmt)
     hub1.agreement.init(SUFFIX, HOST_CONSUMER_1, PORT_CONSUMER_1)
     hub1.waitForReplInit(h1_c1_agmt)
     hub1.agreement.init(SUFFIX, HOST_CONSUMER_2, PORT_CONSUMER_2)
@@ -687,11 +687,11 @@ def topology(request):
     hub1.waitForReplInit(h1_c3_agmt)
     hub1.agreement.init(SUFFIX, HOST_CONSUMER_4, PORT_CONSUMER_4)
     hub1.waitForReplInit(h1_c4_agmt)
-    master1.agreement.init(SUFFIX, HOST_HUB_2, PORT_HUB_2)
-    master1.waitForReplInit(m1_h2_agmt)
+    supplier1.agreement.init(SUFFIX, HOST_HUB_2, PORT_HUB_2)
+    supplier1.waitForReplInit(m1_h2_agmt)
 
     # Check replication is working...
-    if master1.testReplication(DEFAULT_SUFFIX, consumer1):
+    if supplier1.testReplication(DEFAULT_SUFFIX, consumer1):
         log.info('Replication is working.')
     else:
         log.fatal('Replication is not working.')
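
Each init call above is paired with waitForReplInit() on the agreement DN
created earlier, so total initializations run strictly one at a time; note
that hub2 is only initialized after hub1 has seeded the four consumers. A
minimal table-driven sketch of the supplier1-side sequencing, assuming the
agreement variables from the previous hunks:

    init_plan = [(HOST_SUPPLIER_2, PORT_SUPPLIER_2, m1_m2_agmt),
                 (HOST_SUPPLIER_3, PORT_SUPPLIER_3, m1_m3_agmt),
                 (HOST_SUPPLIER_4, PORT_SUPPLIER_4, m1_m4_agmt),
                 (HOST_HUB_1, PORT_HUB_1, m1_h1_agmt),
                 (HOST_HUB_2, PORT_HUB_2, m1_h2_agmt)]
    for host, port, agmt in init_plan:
        supplier1.agreement.init(SUFFIX, host, port)  # kick off a total init
        supplier1.waitForReplInit(agmt)               # block until it finishes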
@@ -702,10 +702,10 @@ def topology(request):
         them
         """
         if DEBUGGING:
-            master1.stop()
-            master2.stop()
-            master3.stop()
-            master4.stop()
+            supplier1.stop()
+            supplier2.stop()
+            supplier3.stop()
+            supplier4.stop()
             hub1.stop()
             hub2.stop()
             consumer1.stop()
@@ -713,10 +713,10 @@ def topology(request):
             consumer3.stop()
             consumer4.stop()
         else:
-            master1.delete()
-            master2.delete()
-            master3.delete()
-            master4.delete()
+            supplier1.delete()
+            supplier2.delete()
+            supplier3.delete()
+            supplier4.delete()
             hub1.delete()
             hub2.delete()
             consumer1.delete()
@@ -725,7 +725,7 @@ def topology(request):
             consumer4.delete()
     request.addfinalizer(fin)
 
-    return TopologyReplication(master1, master2, master3, master4, hub1, hub2,
+    return TopologyReplication(supplier1, supplier2, supplier3, supplier4, hub1, hub2,
                                consumer1, consumer2, consumer3, consumer4)
 
 
@@ -776,11 +776,11 @@ class AddDelUsers(threading.Thread):
 
 
 def measureConvergence(topology):
-    """Find and measure the convergence of entries from each master
+    """Find and measure the convergence of entries from each supplier
     """
 
-    replicas = [topology.master1, topology.master2, topology.master3,
-                topology.master4, topology.hub1, topology.hub2,
+    replicas = [topology.supplier1, topology.supplier2, topology.supplier3,
+                topology.supplier4, topology.hub1, topology.hub2,
                 topology.consumer1, topology.consumer2, topology.consumer3,
                 topology.consumer4]
 
@@ -789,39 +789,39 @@ def measureConvergence(topology):
     else:
         interval = 1
 
-    for master in [('1', topology.master1),
-                   ('2', topology.master2),
-                   ('3', topology.master3),
-                   ('4', topology.master4)]:
+    for supplier in [('1', topology.supplier1),
+                   ('2', topology.supplier2),
+                   ('3', topology.supplier3),
+                   ('4', topology.supplier4)]:
         # Start with the first entry
-        entries = ['ADD dn="uid=master_%s-0,%s' %
-                   (master[0], DEFAULT_SUFFIX)]
+        entries = ['ADD dn="uid=supplier_%s-0,%s' %
+                   (supplier[0], DEFAULT_SUFFIX)]
 
         # Add incremental entries to the list
         idx = interval
         while idx < ADD_DEL_COUNT:
-            entries.append('ADD dn="uid=master_%s-%d,%s' %
-                         (master[0], idx, DEFAULT_SUFFIX))
+            entries.append('ADD dn="uid=supplier_%s-%d,%s' %
+                         (supplier[0], idx, DEFAULT_SUFFIX))
             idx += interval
 
         # Add the last entry to the list (if it was not already added)
         if idx != (ADD_DEL_COUNT - 1):
-            entries.append('ADD dn="uid=master_%s-%d,%s' %
-                           (master[0], (ADD_DEL_COUNT - 1),
+            entries.append('ADD dn="uid=supplier_%s-%d,%s' %
+                           (supplier[0], (ADD_DEL_COUNT - 1),
                            DEFAULT_SUFFIX))
 
-        ReplTools.replConvReport(DEFAULT_SUFFIX, entries, master[1], replicas)
+        ReplTools.replConvReport(DEFAULT_SUFFIX, entries, supplier[1], replicas)
 
 
 def test_MMR_Integrity(topology):
-    """Apply load to 4 masters at the same time.  Perform adds and deletes.
+    """Apply load to 4 suppliers at the same time.  Perform adds and deletes.
     If any updates are missed we will see an error 32 in the access logs or
     we will have entries left over once the test completes.
     """
     loop = 0
 
-    ALL_REPLICAS = [topology.master1, topology.master2, topology.master3,
-                    topology.master4,
+    ALL_REPLICAS = [topology.supplier1, topology.supplier2, topology.supplier3,
+                    topology.supplier4,
                     topology.hub1, topology.hub2,
                     topology.consumer1, topology.consumer2,
                     topology.consumer3, topology.consumer4]
@@ -858,13 +858,13 @@ def test_MMR_Integrity(topology):
         # Fire off 4 threads to apply the load
         log.info("Start adding/deleting: " + getDateTime())
         startTime = time.time()
-        add_del_m1 = AddDelUsers(topology.master1)
+        add_del_m1 = AddDelUsers(topology.supplier1)
         add_del_m1.start()
-        add_del_m2 = AddDelUsers(topology.master2)
+        add_del_m2 = AddDelUsers(topology.supplier2)
         add_del_m2.start()
-        add_del_m3 = AddDelUsers(topology.master3)
+        add_del_m3 = AddDelUsers(topology.supplier3)
         add_del_m3.start()
-        add_del_m4 = AddDelUsers(topology.master4)
+        add_del_m4 = AddDelUsers(topology.supplier4)
         add_del_m4.start()
 
         # Wait for threads to finish sending their updates
@@ -903,8 +903,8 @@ def test_MMR_Integrity(topology):
                 break
             else:
                 # Check if replication is idle
-                replicas = [topology.master1, topology.master2,
-                            topology.master3, topology.master4,
+                replicas = [topology.supplier1, topology.supplier2,
+                            topology.supplier3, topology.supplier4,
                             topology.hub1, topology.hub2]
                 if ReplTools.replIdle(replicas, DEFAULT_SUFFIX):
                     # Replication is idle - wait 30 secs for access log buffer

+ 176 - 176
dirsrvtests/tests/stress/replication/mmr_01_4m_test.py

@@ -22,280 +22,280 @@ ADD_DEL_COUNT = 50000
 MAX_LOOPS = 2
 TEST_CONVERGE_LATENCY = True
 CONVERGENCE_TIMEOUT = '60'
-master_list = []
+supplier_list = []
 hub_list = []
 con_list = []
 TEST_START = time.time()
 
 LAST_DN_IDX = ADD_DEL_COUNT - 1
-LAST_DN_M1 = 'DEL dn="uid=master_1-%d,%s' % (LAST_DN_IDX, DEFAULT_SUFFIX)
-LAST_DN_M2 = 'DEL dn="uid=master_2-%d,%s' % (LAST_DN_IDX, DEFAULT_SUFFIX)
-LAST_DN_M3 = 'DEL dn="uid=master_3-%d,%s' % (LAST_DN_IDX, DEFAULT_SUFFIX)
-LAST_DN_M4 = 'DEL dn="uid=master_4-%d,%s' % (LAST_DN_IDX, DEFAULT_SUFFIX)
+LAST_DN_M1 = 'DEL dn="uid=supplier_1-%d,%s' % (LAST_DN_IDX, DEFAULT_SUFFIX)
+LAST_DN_M2 = 'DEL dn="uid=supplier_2-%d,%s' % (LAST_DN_IDX, DEFAULT_SUFFIX)
+LAST_DN_M3 = 'DEL dn="uid=supplier_3-%d,%s' % (LAST_DN_IDX, DEFAULT_SUFFIX)
+LAST_DN_M4 = 'DEL dn="uid=supplier_4-%d,%s' % (LAST_DN_IDX, DEFAULT_SUFFIX)
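
With ADD_DEL_COUNT = 50000, LAST_DN_IDX is 49999 and each constant expands to
a DEL log-line prefix; the quote after dn= is left unbalanced, presumably
because the value is matched as a prefix against access-log lines rather than
parsed as a DN. For example, assuming the suite's usual DEFAULT_SUFFIX of
dc=example,dc=com:

    >>> LAST_DN_M1
    'DEL dn="uid=supplier_1-49999,dc=example,dc=com'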
 
 
 class TopologyReplication(object):
     """The Replication Topology Class"""
-    def __init__(self, master1, master2, master3, master4):
+    def __init__(self, supplier1, supplier2, supplier3, supplier4):
         """Init"""
-        master1.open()
-        self.master1 = master1
-        master2.open()
-        self.master2 = master2
-        master3.open()
-        self.master3 = master3
-        master4.open()
-        self.master4 = master4
+        supplier1.open()
+        self.supplier1 = supplier1
+        supplier2.open()
+        self.supplier2 = supplier2
+        supplier3.open()
+        self.supplier3 = supplier3
+        supplier4.open()
+        self.supplier4 = supplier4
 
 
 @pytest.fixture(scope="module")
 def topology(request):
     """Create Replication Deployment"""
 
-    # Creating master 1...
+    # Creating supplier 1...
     if DEBUGGING:
-        master1 = DirSrv(verbose=True)
+        supplier1 = DirSrv(verbose=True)
     else:
-        master1 = DirSrv(verbose=False)
-    args_instance[SER_HOST] = HOST_MASTER_1
-    args_instance[SER_PORT] = PORT_MASTER_1
-    args_instance[SER_SERVERID_PROP] = SERVERID_MASTER_1
+        supplier1 = DirSrv(verbose=False)
+    args_instance[SER_HOST] = HOST_SUPPLIER_1
+    args_instance[SER_PORT] = PORT_SUPPLIER_1
+    args_instance[SER_SERVERID_PROP] = SERVERID_SUPPLIER_1
     args_instance[SER_CREATION_SUFFIX] = DEFAULT_SUFFIX
-    args_master = args_instance.copy()
-    master1.allocate(args_master)
-    instance_master1 = master1.exists()
-    if instance_master1:
-        master1.delete()
-    master1.create()
-    master1.open()
-    master1.replica.enableReplication(suffix=SUFFIX, role=ReplicaRole.MASTER,
-                                      replicaId=REPLICAID_MASTER_1)
-
-    # Creating master 2...
+    args_supplier = args_instance.copy()
+    supplier1.allocate(args_supplier)
+    instance_supplier1 = supplier1.exists()
+    if instance_supplier1:
+        supplier1.delete()
+    supplier1.create()
+    supplier1.open()
+    supplier1.replica.enableReplication(suffix=SUFFIX, role=ReplicaRole.SUPPLIER,
+                                      replicaId=REPLICAID_SUPPLIER_1)
+
+    # Creating supplier 2...
     if DEBUGGING:
-        master2 = DirSrv(verbose=True)
+        supplier2 = DirSrv(verbose=True)
     else:
-        master2 = DirSrv(verbose=False)
-    args_instance[SER_HOST] = HOST_MASTER_2
-    args_instance[SER_PORT] = PORT_MASTER_2
-    args_instance[SER_SERVERID_PROP] = SERVERID_MASTER_2
+        supplier2 = DirSrv(verbose=False)
+    args_instance[SER_HOST] = HOST_SUPPLIER_2
+    args_instance[SER_PORT] = PORT_SUPPLIER_2
+    args_instance[SER_SERVERID_PROP] = SERVERID_SUPPLIER_2
     args_instance[SER_CREATION_SUFFIX] = DEFAULT_SUFFIX
-    args_master = args_instance.copy()
-    master2.allocate(args_master)
-    instance_master2 = master2.exists()
-    if instance_master2:
-        master2.delete()
-    master2.create()
-    master2.open()
-    master2.replica.enableReplication(suffix=SUFFIX, role=ReplicaRole.MASTER,
-                                      replicaId=REPLICAID_MASTER_2)
-
-    # Creating master 3...
+    args_supplier = args_instance.copy()
+    supplier2.allocate(args_supplier)
+    instance_supplier2 = supplier2.exists()
+    if instance_supplier2:
+        supplier2.delete()
+    supplier2.create()
+    supplier2.open()
+    supplier2.replica.enableReplication(suffix=SUFFIX, role=ReplicaRole.SUPPLIER,
+                                      replicaId=REPLICAID_SUPPLIER_2)
+
+    # Creating supplier 3...
     if DEBUGGING:
-        master3 = DirSrv(verbose=True)
+        supplier3 = DirSrv(verbose=True)
     else:
-        master3 = DirSrv(verbose=False)
-    args_instance[SER_HOST] = HOST_MASTER_3
-    args_instance[SER_PORT] = PORT_MASTER_3
-    args_instance[SER_SERVERID_PROP] = SERVERID_MASTER_3
+        supplier3 = DirSrv(verbose=False)
+    args_instance[SER_HOST] = HOST_SUPPLIER_3
+    args_instance[SER_PORT] = PORT_SUPPLIER_3
+    args_instance[SER_SERVERID_PROP] = SERVERID_SUPPLIER_3
     args_instance[SER_CREATION_SUFFIX] = DEFAULT_SUFFIX
-    args_master = args_instance.copy()
-    master3.allocate(args_master)
-    instance_master3 = master3.exists()
-    if instance_master3:
-        master3.delete()
-    master3.create()
-    master3.open()
-    master3.replica.enableReplication(suffix=SUFFIX, role=ReplicaRole.MASTER,
-                                      replicaId=REPLICAID_MASTER_3)
-
-    # Creating master 4...
+    args_supplier = args_instance.copy()
+    supplier3.allocate(args_supplier)
+    instance_supplier3 = supplier3.exists()
+    if instance_supplier3:
+        supplier3.delete()
+    supplier3.create()
+    supplier3.open()
+    supplier3.replica.enableReplication(suffix=SUFFIX, role=ReplicaRole.SUPPLIER,
+                                      replicaId=REPLICAID_SUPPLIER_3)
+
+    # Creating supplier 4...
     if DEBUGGING:
-        master4 = DirSrv(verbose=True)
+        supplier4 = DirSrv(verbose=True)
     else:
-        master4 = DirSrv(verbose=False)
-    args_instance[SER_HOST] = HOST_MASTER_4
-    args_instance[SER_PORT] = PORT_MASTER_4
-    args_instance[SER_SERVERID_PROP] = SERVERID_MASTER_4
+        supplier4 = DirSrv(verbose=False)
+    args_instance[SER_HOST] = HOST_SUPPLIER_4
+    args_instance[SER_PORT] = PORT_SUPPLIER_4
+    args_instance[SER_SERVERID_PROP] = SERVERID_SUPPLIER_4
     args_instance[SER_CREATION_SUFFIX] = DEFAULT_SUFFIX
-    args_master = args_instance.copy()
-    master4.allocate(args_master)
-    instance_master4 = master4.exists()
-    if instance_master4:
-        master4.delete()
-    master4.create()
-    master4.open()
-    master4.replica.enableReplication(suffix=SUFFIX, role=ReplicaRole.MASTER,
-                                      replicaId=REPLICAID_MASTER_4)
+    args_supplier = args_instance.copy()
+    supplier4.allocate(args_supplier)
+    instance_supplier4 = supplier4.exists()
+    if instance_supplier4:
+        supplier4.delete()
+    supplier4.create()
+    supplier4.open()
+    supplier4.replica.enableReplication(suffix=SUFFIX, role=ReplicaRole.SUPPLIER,
+                                      replicaId=REPLICAID_SUPPLIER_4)
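
The four creation blocks differ only in their HOST/PORT/SERVERID/REPLICAID
constants, so the sequence can be sketched as a loop; the globals() lookup is
purely an illustrative shortcut, not something the suite itself does:

    suppliers = []
    for i in (1, 2, 3, 4):
        inst = DirSrv(verbose=DEBUGGING)
        args_instance[SER_HOST] = globals()['HOST_SUPPLIER_%d' % i]
        args_instance[SER_PORT] = globals()['PORT_SUPPLIER_%d' % i]
        args_instance[SER_SERVERID_PROP] = globals()['SERVERID_SUPPLIER_%d' % i]
        args_instance[SER_CREATION_SUFFIX] = DEFAULT_SUFFIX
        inst.allocate(args_instance.copy())
        if inst.exists():          # clear any stale instance first
            inst.delete()
        inst.create()
        inst.open()
        inst.replica.enableReplication(
            suffix=SUFFIX, role=ReplicaRole.SUPPLIER,
            replicaId=globals()['REPLICAID_SUPPLIER_%d' % i])
        suppliers.append(inst)
    supplier1, supplier2, supplier3, supplier4 = suppliers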
 
     #
     # Create all the agreements
     #
-    # Creating agreement from master 1 to master 2
-    properties = {RA_NAME: 'meTo_' + master2.host + ':' + str(master2.port),
+    # Creating agreement from supplier 1 to supplier 2
+    properties = {RA_NAME: 'meTo_' + supplier2.host + ':' + str(supplier2.port),
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                   RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
-    m1_m2_agmt = master1.agreement.create(suffix=SUFFIX, host=master2.host,
-                                          port=master2.port,
+    m1_m2_agmt = supplier1.agreement.create(suffix=SUFFIX, host=supplier2.host,
+                                          port=supplier2.port,
                                           properties=properties)
     if not m1_m2_agmt:
-        log.fatal("Fail to create a master -> master replica agreement")
+        log.fatal("Fail to create a supplier -> supplier replica agreement")
         sys.exit(1)
     log.debug("%s created" % m1_m2_agmt)
 
-    # Creating agreement from master 1 to master 3
-    properties = {RA_NAME: 'meTo_' + master3.host + ':' + str(master3.port),
+    # Creating agreement from supplier 1 to supplier 3
+    properties = {RA_NAME: 'meTo_' + supplier3.host + ':' + str(supplier3.port),
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                   RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
-    m1_m3_agmt = master1.agreement.create(suffix=SUFFIX, host=master3.host,
-                                          port=master3.port,
+    m1_m3_agmt = supplier1.agreement.create(suffix=SUFFIX, host=supplier3.host,
+                                          port=supplier3.port,
                                           properties=properties)
     if not m1_m3_agmt:
-        log.fatal("Fail to create a master -> master replica agreement")
+        log.fatal("Fail to create a supplier -> supplier replica agreement")
         sys.exit(1)
     log.debug("%s created" % m1_m3_agmt)
 
-    # Creating agreement from master 1 to master 4
-    properties = {RA_NAME: 'meTo_' + master4.host + ':' + str(master4.port),
+    # Creating agreement from supplier 1 to supplier 4
+    properties = {RA_NAME: 'meTo_' + supplier4.host + ':' + str(supplier4.port),
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                   RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
-    m1_m4_agmt = master1.agreement.create(suffix=SUFFIX, host=master4.host,
-                                          port=master4.port,
+    m1_m4_agmt = supplier1.agreement.create(suffix=SUFFIX, host=supplier4.host,
+                                          port=supplier4.port,
                                           properties=properties)
     if not m1_m4_agmt:
-        log.fatal("Fail to create a master -> master replica agreement")
+        log.fatal("Fail to create a supplier -> supplier replica agreement")
         sys.exit(1)
     log.debug("%s created" % m1_m4_agmt)
 
-    # Creating agreement from master 2 to master 1
-    properties = {RA_NAME: 'meTo_' + master1.host + ':' + str(master1.port),
+    # Creating agreement from supplier 2 to supplier 1
+    properties = {RA_NAME: 'meTo_' + supplier1.host + ':' + str(supplier1.port),
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                   RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
-    m2_m1_agmt = master2.agreement.create(suffix=SUFFIX, host=master1.host,
-                                          port=master1.port,
+    m2_m1_agmt = supplier2.agreement.create(suffix=SUFFIX, host=supplier1.host,
+                                          port=supplier1.port,
                                           properties=properties)
     if not m2_m1_agmt:
-        log.fatal("Fail to create a master -> master replica agreement")
+        log.fatal("Fail to create a supplier -> supplier replica agreement")
         sys.exit(1)
     log.debug("%s created" % m2_m1_agmt)
 
-    # Creating agreement from master 2 to master 3
-    properties = {RA_NAME: 'meTo_' + master3.host + ':' + str(master3.port),
+    # Creating agreement from supplier 2 to supplier 3
+    properties = {RA_NAME: 'meTo_' + supplier3.host + ':' + str(supplier3.port),
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                   RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
-    m2_m3_agmt = master2.agreement.create(suffix=SUFFIX, host=master3.host,
-                                          port=master3.port,
+    m2_m3_agmt = supplier2.agreement.create(suffix=SUFFIX, host=supplier3.host,
+                                          port=supplier3.port,
                                           properties=properties)
     if not m2_m3_agmt:
-        log.fatal("Fail to create a master -> master replica agreement")
+        log.fatal("Fail to create a supplier -> supplier replica agreement")
         sys.exit(1)
     log.debug("%s created" % m2_m3_agmt)
 
-    # Creating agreement from master 2 to master 4
-    properties = {RA_NAME: 'meTo_' + master4.host + ':' + str(master4.port),
+    # Creating agreement from supplier 2 to supplier 4
+    properties = {RA_NAME: 'meTo_' + supplier4.host + ':' + str(supplier4.port),
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                   RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
-    m2_m4_agmt = master2.agreement.create(suffix=SUFFIX, host=master4.host,
-                                          port=master4.port,
+    m2_m4_agmt = supplier2.agreement.create(suffix=SUFFIX, host=supplier4.host,
+                                          port=supplier4.port,
                                           properties=properties)
     if not m2_m4_agmt:
-        log.fatal("Fail to create a master -> master replica agreement")
+        log.fatal("Fail to create a supplier -> supplier replica agreement")
         sys.exit(1)
     log.debug("%s created" % m2_m4_agmt)
 
-    # Creating agreement from master 3 to master 1
-    properties = {RA_NAME: 'meTo_' + master1.host + ':' + str(master1.port),
+    # Creating agreement from supplier 3 to supplier 1
+    properties = {RA_NAME: 'meTo_' + supplier1.host + ':' + str(supplier1.port),
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                   RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
-    m3_m1_agmt = master3.agreement.create(suffix=SUFFIX, host=master1.host,
-                                          port=master1.port,
+    m3_m1_agmt = supplier3.agreement.create(suffix=SUFFIX, host=supplier1.host,
+                                          port=supplier1.port,
                                           properties=properties)
     if not m3_m1_agmt:
-        log.fatal("Fail to create a master -> master replica agreement")
+        log.fatal("Fail to create a supplier -> supplier replica agreement")
         sys.exit(1)
     log.debug("%s created" % m3_m1_agmt)
 
-    # Creating agreement from master 3 to master 2
-    properties = {RA_NAME: 'meTo_' + master2.host + ':' + str(master2.port),
+    # Creating agreement from supplier 3 to supplier 2
+    properties = {RA_NAME: 'meTo_' + supplier2.host + ':' + str(supplier2.port),
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                   RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
-    m3_m2_agmt = master3.agreement.create(suffix=SUFFIX, host=master2.host,
-                                          port=master2.port,
+    m3_m2_agmt = supplier3.agreement.create(suffix=SUFFIX, host=supplier2.host,
+                                          port=supplier2.port,
                                           properties=properties)
     if not m3_m2_agmt:
-        log.fatal("Fail to create a master -> master replica agreement")
+        log.fatal("Fail to create a supplier -> supplier replica agreement")
         sys.exit(1)
     log.debug("%s created" % m3_m2_agmt)
 
-    # Creating agreement from master 3 to master 4
-    properties = {RA_NAME: 'meTo_' + master4.host + ':' + str(master4.port),
+    # Creating agreement from supplier 3 to supplier 4
+    properties = {RA_NAME: 'meTo_' + supplier4.host + ':' + str(supplier4.port),
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                   RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
-    m3_m4_agmt = master3.agreement.create(suffix=SUFFIX, host=master4.host,
-                                          port=master4.port,
+    m3_m4_agmt = supplier3.agreement.create(suffix=SUFFIX, host=supplier4.host,
+                                          port=supplier4.port,
                                           properties=properties)
     if not m3_m4_agmt:
-        log.fatal("Fail to create a master -> master replica agreement")
+        log.fatal("Fail to create a supplier -> supplier replica agreement")
         sys.exit(1)
     log.debug("%s created" % m3_m4_agmt)
 
-    # Creating agreement from master 4 to master 1
-    properties = {RA_NAME: 'meTo_' + master1.host + ':' + str(master1.port),
+    # Creating agreement from supplier 4 to supplier 1
+    properties = {RA_NAME: 'meTo_' + supplier1.host + ':' + str(supplier1.port),
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                   RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
-    m4_m1_agmt = master4.agreement.create(suffix=SUFFIX, host=master1.host,
-                                          port=master1.port,
+    m4_m1_agmt = supplier4.agreement.create(suffix=SUFFIX, host=supplier1.host,
+                                          port=supplier1.port,
                                           properties=properties)
     if not m4_m1_agmt:
-        log.fatal("Fail to create a master -> master replica agreement")
+        log.fatal("Fail to create a supplier -> supplier replica agreement")
         sys.exit(1)
     log.debug("%s created" % m4_m1_agmt)
 
-    # Creating agreement from master 4 to master 2
-    properties = {RA_NAME: 'meTo_' + master2.host + ':' + str(master2.port),
+    # Creating agreement from supplier 4 to supplier 2
+    properties = {RA_NAME: 'meTo_' + supplier2.host + ':' + str(supplier2.port),
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                   RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
-    m4_m2_agmt = master4.agreement.create(suffix=SUFFIX, host=master2.host,
-                                          port=master2.port,
+    m4_m2_agmt = supplier4.agreement.create(suffix=SUFFIX, host=supplier2.host,
+                                          port=supplier2.port,
                                           properties=properties)
     if not m4_m2_agmt:
-        log.fatal("Fail to create a master -> master replica agreement")
+        log.fatal("Fail to create a supplier -> supplier replica agreement")
         sys.exit(1)
     log.debug("%s created" % m4_m2_agmt)
 
-    # Creating agreement from master 4 to master 3
-    properties = {RA_NAME: 'meTo_' + master3.host + ':' + str(master3.port),
+    # Creating agreement from supplier 4 to supplier 3
+    properties = {RA_NAME: 'meTo_' + supplier3.host + ':' + str(supplier3.port),
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                   RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
-    m4_m3_agmt = master4.agreement.create(suffix=SUFFIX, host=master3.host,
-                                          port=master3.port,
+    m4_m3_agmt = supplier4.agreement.create(suffix=SUFFIX, host=supplier3.host,
+                                          port=supplier3.port,
                                           properties=properties)
     if not m4_m3_agmt:
-        log.fatal("Fail to create a master -> master replica agreement")
+        log.fatal("Fail to create a supplier -> supplier replica agreement")
         sys.exit(1)
     log.debug("%s created" % m4_m3_agmt)
 
@@ -305,15 +305,15 @@ def topology(request):
     #
     # Initialize all the agreements
     #
-    master1.agreement.init(SUFFIX, HOST_MASTER_2, PORT_MASTER_2)
-    master1.waitForReplInit(m1_m2_agmt)
-    master1.agreement.init(SUFFIX, HOST_MASTER_3, PORT_MASTER_3)
-    master1.waitForReplInit(m1_m3_agmt)
-    master1.agreement.init(SUFFIX, HOST_MASTER_4, PORT_MASTER_4)
-    master1.waitForReplInit(m1_m4_agmt)
+    supplier1.agreement.init(SUFFIX, HOST_SUPPLIER_2, PORT_SUPPLIER_2)
+    supplier1.waitForReplInit(m1_m2_agmt)
+    supplier1.agreement.init(SUFFIX, HOST_SUPPLIER_3, PORT_SUPPLIER_3)
+    supplier1.waitForReplInit(m1_m3_agmt)
+    supplier1.agreement.init(SUFFIX, HOST_SUPPLIER_4, PORT_SUPPLIER_4)
+    supplier1.waitForReplInit(m1_m4_agmt)
 
     # Check replication is working...
-    if master1.testReplication(DEFAULT_SUFFIX, master4):
+    if supplier1.testReplication(DEFAULT_SUFFIX, supplier4):
         log.info('Replication is working.')
     else:
         log.fatal('Replication is not working.')
@@ -324,18 +324,18 @@ def topology(request):
         them
         """
         if 1 or DEBUGGING:
-            master1.stop()
-            master2.stop()
-            master3.stop()
-            master4.stop()
+            supplier1.stop()
+            supplier2.stop()
+            supplier3.stop()
+            supplier4.stop()
         else:
-            master1.delete()
-            master2.delete()
-            master3.delete()
-            master4.delete()
+            supplier1.delete()
+            supplier2.delete()
+            supplier3.delete()
+            supplier4.delete()
     request.addfinalizer(fin)
 
-    return TopologyReplication(master1, master2, master3, master4)
+    return TopologyReplication(supplier1, supplier2, supplier3, supplier4)
 
 
 class AddDelUsers(threading.Thread):
@@ -385,50 +385,50 @@ class AddDelUsers(threading.Thread):
 
 
 def measureConvergence(topology):
-    """Find and measure the convergence of entries from each master
+    """Find and measure the convergence of entries from each supplier
     """
 
-    replicas = [topology.master1, topology.master2, topology.master3,
-                topology.master4]
+    replicas = [topology.supplier1, topology.supplier2, topology.supplier3,
+                topology.supplier4]
 
     if ADD_DEL_COUNT > 10:
         interval = int(ADD_DEL_COUNT / 10)
     else:
         interval = 1
 
-    for master in [('1', topology.master1),
-                   ('2', topology.master2),
-                   ('3', topology.master3),
-                   ('4', topology.master4)]:
+    for supplier in [('1', topology.supplier1),
+                   ('2', topology.supplier2),
+                   ('3', topology.supplier3),
+                   ('4', topology.supplier4)]:
         # Start with the first entry
-        entries = ['ADD dn="uid=master_%s-0,%s' %
-                   (master[0], DEFAULT_SUFFIX)]
+        entries = ['ADD dn="uid=supplier_%s-0,%s' %
+                   (supplier[0], DEFAULT_SUFFIX)]
 
         # Add incremental entries to the list
         idx = interval
         while idx < ADD_DEL_COUNT:
-            entries.append('ADD dn="uid=master_%s-%d,%s' %
-                         (master[0], idx, DEFAULT_SUFFIX))
+            entries.append('ADD dn="uid=supplier_%s-%d,%s' %
+                         (supplier[0], idx, DEFAULT_SUFFIX))
             idx += interval
 
         # Add the last entry to the list (if it was not already added)
         if idx != (ADD_DEL_COUNT - 1):
-            entries.append('ADD dn="uid=master_%s-%d,%s' %
-                           (master[0], (ADD_DEL_COUNT - 1),
+            entries.append('ADD dn="uid=supplier_%s-%d,%s' %
+                           (supplier[0], (ADD_DEL_COUNT - 1),
                            DEFAULT_SUFFIX))
 
-        ReplTools.replConvReport(DEFAULT_SUFFIX, entries, master[1], replicas)
+        ReplTools.replConvReport(DEFAULT_SUFFIX, entries, supplier[1], replicas)
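
With ADD_DEL_COUNT = 50000 the sampling interval is int(50000 / 10) = 5000,
so each supplier's convergence is measured at entries 0, 5000, ..., 45000,
plus the final entry 49999 appended by the trailing check. A quick check of
that arithmetic, mirroring the loop above:

    >>> ADD_DEL_COUNT = 50000
    >>> interval = int(ADD_DEL_COUNT / 10)
    >>> idxs, idx = [0], interval
    >>> while idx < ADD_DEL_COUNT:
    ...     idxs.append(idx)
    ...     idx += interval
    ...
    >>> if idx != (ADD_DEL_COUNT - 1):
    ...     idxs.append(ADD_DEL_COUNT - 1)
    ...
    >>> idxs
    [0, 5000, 10000, 15000, 20000, 25000, 30000, 35000, 40000, 45000, 49999]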
 
 
 def test_MMR_Integrity(topology):
-    """Apply load to 4 masters at the same time.  Perform adds and deletes.
+    """Apply load to 4 suppliers at the same time.  Perform adds and deletes.
     If any updates are missed we will see an error 32 in the access logs or
     we will have entries left over once the test completes.
     """
     loop = 0
 
-    ALL_REPLICAS = [topology.master1, topology.master2, topology.master3,
-                    topology.master4]
+    ALL_REPLICAS = [topology.supplier1, topology.supplier2, topology.supplier3,
+                    topology.supplier4]
 
     if TEST_CONVERGE_LATENCY:
         try:
@@ -462,13 +462,13 @@ def test_MMR_Integrity(topology):
         # Fire off 4 threads to apply the load
         log.info("Start adding/deleting: " + getDateTime())
         startTime = time.time()
-        add_del_m1 = AddDelUsers(topology.master1)
+        add_del_m1 = AddDelUsers(topology.supplier1)
         add_del_m1.start()
-        add_del_m2 = AddDelUsers(topology.master2)
+        add_del_m2 = AddDelUsers(topology.supplier2)
         add_del_m2.start()
-        add_del_m3 = AddDelUsers(topology.master3)
+        add_del_m3 = AddDelUsers(topology.supplier3)
         add_del_m3.start()
-        add_del_m4 = AddDelUsers(topology.master4)
+        add_del_m4 = AddDelUsers(topology.supplier4)
         add_del_m4.start()
 
         # Wait for threads to finish sending their updates
@@ -507,8 +507,8 @@ def test_MMR_Integrity(topology):
                 break
             else:
                 # Check if replication is idle
-                replicas = [topology.master1, topology.master2,
-                            topology.master3, topology.master4]
+                replicas = [topology.supplier1, topology.supplier2,
+                            topology.supplier3, topology.supplier4]
                 if ReplTools.replIdle(replicas, DEFAULT_SUFFIX):
                     # Replication is idle - wait 30 secs for access log buffer
                     time.sleep(30)

+ 159 - 159
dirsrvtests/tests/suites/acl/acl_test.py

@@ -60,7 +60,7 @@ def add_attr(topology_m2, attr_name):
     ATTR_VALUE = """(NAME '%s' \
                     DESC 'Attribute filteri-Multi-Valued' \
                     SYNTAX 1.3.6.1.4.1.1466.115.121.1.27)""" % attr_name
-    schema = Schema(topology_m2.ms["master1"])
+    schema = Schema(topology_m2.ms["supplier1"])
     schema.add('attributeTypes', ATTR_VALUE)
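
For attr_name = 'protectedOperation', the call above is equivalent (modulo
the whitespace runs that the backslash-continued literal leaves inside the
value; the 'filteri' typo is in the suite's own DESC string) to:

    schema = Schema(topology_m2.ms["supplier1"])
    schema.add('attributeTypes',
               "(NAME 'protectedOperation' "
               "DESC 'Attribute filteri-Multi-Valued' "
               "SYNTAX 1.3.6.1.4.1.1466.115.121.1.27)")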
 
 
@@ -71,7 +71,7 @@ def aci_with_attr_subtype(request, topology_m2):
     TARGET_ATTR = 'protectedOperation'
     USER_ATTR = 'allowedToPerform'
     SUBTYPE = request.param
-    suffix = Domain(topology_m2.ms["master1"], DEFAULT_SUFFIX)
+    suffix = Domain(topology_m2.ms["supplier1"], DEFAULT_SUFFIX)
 
     log.info("========Executing test with '%s' subtype========" % SUBTYPE)
     log.info("        Add a target attribute")
@@ -103,7 +103,7 @@ def test_aci_attr_subtype_targetattr(topology_m2, aci_with_attr_subtype):
 
     :id: a99ccda0-5d0b-4d41-99cc-c5e207b3b687
     :parametrized: yes
-    :setup: MMR with two masters,
+    :setup: MMR with two suppliers,
             Define two attributes in the schema - targetattr and userattr,
             Add an ACI with attribute subtypes - "lang-ja", "binary", "phonetic"
             one by one
@@ -117,7 +117,7 @@ def test_aci_attr_subtype_targetattr(topology_m2, aci_with_attr_subtype):
 
     log.info("Search for the added attribute")
     try:
-        entries = topology_m2.ms["master1"].search_s(DEFAULT_SUFFIX,
+        entries = topology_m2.ms["supplier1"].search_s(DEFAULT_SUFFIX,
                                                      ldap.SCOPE_BASE,
                                                      '(objectclass=*)', ['aci'])
         entry = str(entries[0])
@@ -130,14 +130,14 @@ def test_aci_attr_subtype_targetattr(topology_m2, aci_with_attr_subtype):
 
 
 def _bind_manager(topology_m2):
-    topology_m2.ms["master1"].log.info("Bind as %s " % DN_DM)
-    topology_m2.ms["master1"].simple_bind_s(DN_DM, PASSWORD)
+    topology_m2.ms["supplier1"].log.info("Bind as %s " % DN_DM)
+    topology_m2.ms["supplier1"].simple_bind_s(DN_DM, PASSWORD)
 
 
 def _bind_normal(topology_m2):
     # bind as bind_entry
-    topology_m2.ms["master1"].log.info("Bind as %s" % BIND_DN)
-    topology_m2.ms["master1"].simple_bind_s(BIND_DN, BIND_PW)
+    topology_m2.ms["supplier1"].log.info("Bind as %s" % BIND_DN)
+    topology_m2.ms["supplier1"].simple_bind_s(BIND_DN, BIND_PW)
 
 
 def _moddn_aci_deny_tree(topology_m2, mod_type=None,
@@ -156,9 +156,9 @@ def _moddn_aci_deny_tree(topology_m2, mod_type=None,
     ACI_ALLOW = "(version 3.0; acl \"Deny MODDN to prod_except\"; deny (moddn)"
     ACI_SUBJECT = " userdn = \"ldap:///%s\";)" % BIND_DN
     ACI_BODY = ACI_TARGET_TO + ACI_TARGET_FROM + ACI_ALLOW + ACI_SUBJECT
-    # topology_m2.ms["master1"].modify_s(SUFFIX, mod)
-    topology_m2.ms["master1"].log.info("Add a DENY aci under %s " % PROD_EXCEPT_DN)
-    prod_except = OrganizationalRole(topology_m2.ms["master1"], PROD_EXCEPT_DN)
+    # topology_m2.ms["supplier1"].modify_s(SUFFIX, mod)
+    topology_m2.ms["supplier1"].log.info("Add a DENY aci under %s " % PROD_EXCEPT_DN)
+    prod_except = OrganizationalRole(topology_m2.ms["supplier1"], PROD_EXCEPT_DN)
     prod_except.set('aci', ACI_BODY, mod_type)
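
Concatenated, the fragments form a single deny ACI. The ACI_TARGET_TO and
ACI_TARGET_FROM fragments are defined above this hunk, so their exact filters
are not visible here; in 389's ACI syntax a moddn deny of this shape reads
roughly as follows (all DNs below are placeholders, not the suite's values):

    (target_to="ldap:///cn=excepts,cn=staging,dc=example,dc=com")
    (target_from="ldap:///cn=staging,dc=example,dc=com")
    (version 3.0; acl "Deny MODDN to prod_except"; deny (moddn)
     userdn = "ldap:///uid=bind_entry,dc=example,dc=com";)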
 
 
@@ -169,7 +169,7 @@ def _write_aci_staging(topology_m2, mod_type=None):
     ACI_ALLOW = "(version 3.0; acl \"write staging entries\"; allow (write)"
     ACI_SUBJECT = " userdn = \"ldap:///%s\";)" % BIND_DN
     ACI_BODY = ACI_TARGET + ACI_ALLOW + ACI_SUBJECT
-    suffix = Domain(topology_m2.ms["master1"], SUFFIX)
+    suffix = Domain(topology_m2.ms["supplier1"], SUFFIX)
     suffix.set('aci', ACI_BODY, mod_type)
 
 
@@ -180,7 +180,7 @@ def _write_aci_production(topology_m2, mod_type=None):
     ACI_ALLOW = "(version 3.0; acl \"write production entries\"; allow (write)"
     ACI_SUBJECT = " userdn = \"ldap:///%s\";)" % BIND_DN
     ACI_BODY = ACI_TARGET + ACI_ALLOW + ACI_SUBJECT
-    suffix = Domain(topology_m2.ms["master1"], SUFFIX)
+    suffix = Domain(topology_m2.ms["supplier1"], SUFFIX)
     suffix.set('aci', ACI_BODY, mod_type)
 
 
@@ -198,7 +198,7 @@ def _moddn_aci_staging_to_production(topology_m2, mod_type=None,
     ACI_ALLOW = "(version 3.0; acl \"MODDN from staging to production\"; allow (moddn)"
     ACI_SUBJECT = " userdn = \"ldap:///%s\";)" % BIND_DN
     ACI_BODY = ACI_TARGET_FROM + ACI_TARGET_TO + ACI_ALLOW + ACI_SUBJECT
-    suffix = Domain(topology_m2.ms["master1"], SUFFIX)
+    suffix = Domain(topology_m2.ms["supplier1"], SUFFIX)
     suffix.set('aci', ACI_BODY, mod_type)
 
     _write_aci_staging(topology_m2, mod_type=mod_type)
@@ -212,7 +212,7 @@ def _moddn_aci_from_production_to_staging(topology_m2, mod_type=None):
     ACI_ALLOW = "(version 3.0; acl \"MODDN from production to staging\"; allow (moddn)"
     ACI_SUBJECT = " userdn = \"ldap:///%s\";)" % BIND_DN
     ACI_BODY = ACI_TARGET + ACI_ALLOW + ACI_SUBJECT
-    suffix = Domain(topology_m2.ms["master1"], SUFFIX)
+    suffix = Domain(topology_m2.ms["supplier1"], SUFFIX)
     suffix.set('aci', ACI_BODY, mod_type)
 
     _write_aci_production(topology_m2, mod_type=mod_type)
@@ -227,7 +227,7 @@ def moddn_setup(topology_m2):
        - enable ACL logging (commented for performance reason)
     """
 
-    m1 = topology_m2.ms["master1"]
+    m1 = topology_m2.ms["supplier1"]
     o_roles = OrganizationalRoles(m1, SUFFIX)
 
     m1.log.info("\n\n######## INITIALIZATION ########\n")
@@ -266,7 +266,7 @@ def moddn_setup(topology_m2):
     # enable acl error logging
     # mod = [(ldap.MOD_REPLACE, 'nsslapd-errorlog-level', '128')]
     # m1.modify_s(DN_CONFIG, mod)
-    # topology_m2.ms["master2"].modify_s(DN_CONFIG, mod)
+    # topology_m2.ms["supplier2"].modify_s(DN_CONFIG, mod)
 
     # add dummy entries in the staging DIT
     staging_users = UserAccounts(m1, SUFFIX, rdn="cn={}".format(STAGING_CN))
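
The loop that actually populates staging falls outside this hunk; with
lib389's UserAccounts collection it typically looks like the sketch below
(the entry count and the POSIX attribute values are illustrative, not the
suite's exact data):

    for i in range(5):
        name = "%s%d" % (NEW_ACCOUNT, i)
        staging_users.create(properties={
            'uid': name,
            'cn': name,
            'sn': name,
            'uidNumber': str(1000 + i),
            'gidNumber': str(1000 + i),
            'homeDirectory': '/home/%s' % name,
        })
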
@@ -281,7 +281,7 @@ def test_mode_default_add_deny(topology_m2, moddn_setup):
     """Tests that the ADD operation fails (no ADD aci on production)
 
     :id: 301d41d3-b8d8-44c5-8eb9-c2d2816b5a4f
-    :setup: MMR with two masters,
+    :setup: MMR with two suppliers,
             M1 - staging DIT
             M2 - production DIT
             add test accounts in staging DIT
@@ -291,7 +291,7 @@ def test_mode_default_add_deny(topology_m2, moddn_setup):
         1. It should fail due to INSUFFICIENT_ACCESS
     """
 
-    topology_m2.ms["master1"].log.info("\n\n######## mode moddn_aci : ADD (should fail) ########\n")
+    topology_m2.ms["supplier1"].log.info("\n\n######## mode moddn_aci : ADD (should fail) ########\n")
 
     _bind_normal(topology_m2)
 
@@ -299,16 +299,16 @@ def test_mode_default_add_deny(topology_m2, moddn_setup):
     # First try to add an entry in production => INSUFFICIENT_ACCESS
     #
     try:
-        topology_m2.ms["master1"].log.info("Try to add %s" % PRODUCTION_DN)
+        topology_m2.ms["supplier1"].log.info("Try to add %s" % PRODUCTION_DN)
         name = "%s%d" % (NEW_ACCOUNT, 0)
-        topology_m2.ms["master1"].add_s(Entry(("uid=%s,%s" % (name, PRODUCTION_DN), {
+        topology_m2.ms["supplier1"].add_s(Entry(("uid=%s,%s" % (name, PRODUCTION_DN), {
             'objectclass': "top person".split(),
             'sn': name,
             'cn': name,
             'uid': name})))
         assert 0  # this is an error, we should not be allowed to add an entry in production
     except Exception as e:
-        topology_m2.ms["master1"].log.info("Exception (expected): %s" % type(e).__name__)
+        topology_m2.ms["supplier1"].log.info("Exception (expected): %s" % type(e).__name__)
         assert isinstance(e, ldap.INSUFFICIENT_ACCESS)
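
The try/add_s/assert-0/except pattern above predates pytest.raises; the same
ADD check can be written more compactly, assuming the identical fixtures and
imports:

    name = "%s%d" % (NEW_ACCOUNT, 0)
    with pytest.raises(ldap.INSUFFICIENT_ACCESS):
        topology_m2.ms["supplier1"].add_s(Entry(
            ("uid=%s,%s" % (name, PRODUCTION_DN), {
                'objectclass': "top person".split(),
                'sn': name,
                'cn': name,
                'uid': name})))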
 
 
@@ -316,7 +316,7 @@ def test_mode_default_delete_deny(topology_m2, moddn_setup):
     """Tests that the DEL operation fails (no 'delete' aci on production)
 
     :id: 5dcb2213-3875-489a-8cb5-ace057120ad6
-    :setup: MMR with two masters,
+    :setup: MMR with two suppliers,
             M1 - staging DIT
             M2 - production DIT
             add test accounts in staging DIT
@@ -326,19 +326,19 @@ def test_mode_default_delete_deny(topology_m2, moddn_setup):
         1. It should fail due to INSUFFICIENT_ACCESS
     """
 
-    topology_m2.ms["master1"].log.info("\n\n######## DELETE (should fail) ########\n")
+    topology_m2.ms["supplier1"].log.info("\n\n######## DELETE (should fail) ########\n")
 
     _bind_normal(topology_m2)
     #
     # Second try to delete an entry in staging => INSUFFICIENT_ACCESS
     #
     try:
-        topology_m2.ms["master1"].log.info("Try to delete %s" % STAGING_DN)
+        topology_m2.ms["supplier1"].log.info("Try to delete %s" % STAGING_DN)
         name = "%s%d" % (NEW_ACCOUNT, 0)
-        topology_m2.ms["master1"].delete_s("uid=%s,%s" % (name, STAGING_DN))
+        topology_m2.ms["supplier1"].delete_s("uid=%s,%s" % (name, STAGING_DN))
         assert 0  # this is an error, we should not be allowed to add an entry in production
     except Exception as e:
-        topology_m2.ms["master1"].log.info("Exception (expected): %s" % type(e).__name__)
+        topology_m2.ms["supplier1"].log.info("Exception (expected): %s" % type(e).__name__)
         assert isinstance(e, ldap.INSUFFICIENT_ACCESS)
 
 
@@ -359,7 +359,7 @@ def test_moddn_staging_prod(topology_m2, moddn_setup,
 
     :id: cbafdd68-64d6-431f-9f22-6fbf9ed23ca0
     :parametrized: yes
-    :setup: MMR with two masters,
+    :setup: MMR with two suppliers,
             M1 - staging DIT
             M2 - production DIT
             add test accounts in staging DIT
@@ -373,7 +373,7 @@ def test_moddn_staging_prod(topology_m2, moddn_setup,
         2. It should pass due to appropriate ACI
     """
 
-    topology_m2.ms["master1"].log.info("\n\n######## MOVE staging -> Prod (%s) ########\n" % index)
+    topology_m2.ms["supplier1"].log.info("\n\n######## MOVE staging -> Prod (%s) ########\n" % index)
     _bind_normal(topology_m2)
 
     old_rdn = "uid=%s%s" % (NEW_ACCOUNT, index)
@@ -385,28 +385,28 @@ def test_moddn_staging_prod(topology_m2, moddn_setup,
     # Try to rename without the appropriate ACI  => INSUFFICIENT_ACCESS
     #
     try:
-        topology_m2.ms["master1"].log.info("Try to MODDN %s -> %s,%s" % (old_dn, new_rdn, new_superior))
-        topology_m2.ms["master1"].rename_s(old_dn, new_rdn, newsuperior=new_superior)
+        topology_m2.ms["supplier1"].log.info("Try to MODDN %s -> %s,%s" % (old_dn, new_rdn, new_superior))
+        topology_m2.ms["supplier1"].rename_s(old_dn, new_rdn, newsuperior=new_superior)
         assert 0
     except AssertionError:
-        topology_m2.ms["master1"].log.info(
+        topology_m2.ms["supplier1"].log.info(
             "Exception (not really expected exception but that is fine as it fails to rename)")
     except Exception as e:
-        topology_m2.ms["master1"].log.info("Exception (expected): %s" % type(e).__name__)
+        topology_m2.ms["supplier1"].log.info("Exception (expected): %s" % type(e).__name__)
         assert isinstance(e, ldap.INSUFFICIENT_ACCESS)
 
     # successful MOD with the ACI
-    topology_m2.ms["master1"].log.info("\n\n######## MOVE to and from equality filter ########\n")
+    topology_m2.ms["supplier1"].log.info("\n\n######## MOVE to and from equality filter ########\n")
     _bind_manager(topology_m2)
     _moddn_aci_staging_to_production(topology_m2, mod_type=ldap.MOD_ADD,
                                      target_from=tfrom, target_to=tto)
     _bind_normal(topology_m2)
 
     try:
-        topology_m2.ms["master1"].log.info("Try to MODDN %s -> %s,%s" % (old_dn, new_rdn, new_superior))
-        topology_m2.ms["master1"].rename_s(old_dn, new_rdn, newsuperior=new_superior)
+        topology_m2.ms["supplier1"].log.info("Try to MODDN %s -> %s,%s" % (old_dn, new_rdn, new_superior))
+        topology_m2.ms["supplier1"].rename_s(old_dn, new_rdn, newsuperior=new_superior)
     except Exception as e:
-        topology_m2.ms["master1"].log.info("Exception (expected): %s" % type(e).__name__)
+        topology_m2.ms["supplier1"].log.info("Exception (expected): %s" % type(e).__name__)
         if failure:
             assert isinstance(e, ldap.INSUFFICIENT_ACCESS)
 
@@ -421,7 +421,7 @@ def test_moddn_staging_prod_9(topology_m2, moddn_setup):
     """Test with nsslapd-moddn-aci set to off so that MODDN requires an 'add' aci.
 
     :id: 222dd7e8-7ff1-40b8-ad26-6f8e42fbfcd9
-    :setup: MMR with two masters,
+    :setup: MMR with two suppliers,
             M1 - staging DIT
             M2 - production DIT
             add test accounts in staging DIT
@@ -450,38 +450,38 @@ def test_moddn_staging_prod_9(topology_m2, moddn_setup):
         10. It should fail due to INSUFFICIENT_ACCESS
         11. It should pass
     """
-    topology_m2.ms["master1"].log.info("\n\n######## MOVE staging -> Prod (9) ########\n")
+    topology_m2.ms["supplier1"].log.info("\n\n######## MOVE staging -> Prod (9) ########\n")
 
     _bind_normal(topology_m2)
     old_rdn = "uid=%s9" % NEW_ACCOUNT
     old_dn = "%s,%s" % (old_rdn, STAGING_DN)
     new_rdn = old_rdn
     new_superior = PRODUCTION_DN
-    prod = OrganizationalRole(topology_m2.ms["master1"], PRODUCTION_DN)
+    prod = OrganizationalRole(topology_m2.ms["supplier1"], PRODUCTION_DN)
 
     #
     # Try to rename without the appropriate ACI  => INSUFFICIENT_ACCESS
     #
     try:
-        topology_m2.ms["master1"].log.info("Try to MODDN %s -> %s,%s" % (old_dn, new_rdn, new_superior))
-        topology_m2.ms["master1"].rename_s(old_dn, new_rdn, newsuperior=new_superior)
+        topology_m2.ms["supplier1"].log.info("Try to MODDN %s -> %s,%s" % (old_dn, new_rdn, new_superior))
+        topology_m2.ms["supplier1"].rename_s(old_dn, new_rdn, newsuperior=new_superior)
         assert 0
     except AssertionError:
-        topology_m2.ms["master1"].log.info(
+        topology_m2.ms["supplier1"].log.info(
             "Exception (not really expected exception but that is fine as it fails to rename)")
     except Exception as e:
-        topology_m2.ms["master1"].log.info("Exception (expected): %s" % type(e).__name__)
+        topology_m2.ms["supplier1"].log.info("Exception (expected): %s" % type(e).__name__)
         assert isinstance(e, ldap.INSUFFICIENT_ACCESS)
 
     #############
     # Now do tests with no support of moddn aci
     #############
-    topology_m2.ms["master1"].log.info("Disable the moddn right")
+    topology_m2.ms["supplier1"].log.info("Disable the moddn right")
     _bind_manager(topology_m2)
-    topology_m2.ms["master1"].config.set(CONFIG_MODDN_ACI_ATTR, 'off')
+    topology_m2.ms["supplier1"].config.set(CONFIG_MODDN_ACI_ATTR, 'off')
 
     # Add the moddn aci that will not be evaluated because of the config flag
-    topology_m2.ms["master1"].log.info("\n\n######## MOVE to and from equality filter ########\n")
+    topology_m2.ms["supplier1"].log.info("\n\n######## MOVE to and from equality filter ########\n")
     _bind_manager(topology_m2)
     _moddn_aci_staging_to_production(topology_m2, mod_type=ldap.MOD_ADD,
                                      target_from=STAGING_DN, target_to=PRODUCTION_DN)
@@ -489,14 +489,14 @@ def test_moddn_staging_prod_9(topology_m2, moddn_setup):
 
     # It will fail because it will test the ADD right
     try:
-        topology_m2.ms["master1"].log.info("Try to MODDN %s -> %s,%s" % (old_dn, new_rdn, new_superior))
-        topology_m2.ms["master1"].rename_s(old_dn, new_rdn, newsuperior=new_superior)
+        topology_m2.ms["supplier1"].log.info("Try to MODDN %s -> %s,%s" % (old_dn, new_rdn, new_superior))
+        topology_m2.ms["supplier1"].rename_s(old_dn, new_rdn, newsuperior=new_superior)
         assert 0
     except AssertionError:
-        topology_m2.ms["master1"].log.info(
+        topology_m2.ms["supplier1"].log.info(
             "Exception (not really expected exception but that is fine as it fails to rename)")
     except Exception as e:
-        topology_m2.ms["master1"].log.info("Exception (expected): %s" % type(e).__name__)
+        topology_m2.ms["supplier1"].log.info("Exception (expected): %s" % type(e).__name__)
         assert isinstance(e, ldap.INSUFFICIENT_ACCESS)
 
     # remove the moddn aci
@@ -518,8 +518,8 @@ def test_moddn_staging_prod_9(topology_m2, moddn_setup):
     _write_aci_staging(topology_m2, mod_type=ldap.MOD_ADD)
     _bind_normal(topology_m2)
 
-    topology_m2.ms["master1"].log.info("Try to MODDN %s -> %s,%s" % (old_dn, new_rdn, new_superior))
-    topology_m2.ms["master1"].rename_s(old_dn, new_rdn, newsuperior=new_superior)
+    topology_m2.ms["supplier1"].log.info("Try to MODDN %s -> %s,%s" % (old_dn, new_rdn, new_superior))
+    topology_m2.ms["supplier1"].rename_s(old_dn, new_rdn, newsuperior=new_superior)
 
     _bind_manager(topology_m2)
     prod.remove('aci', ACI_BODY)
@@ -529,11 +529,11 @@ def test_moddn_staging_prod_9(topology_m2, moddn_setup):
     #############
     # Now do tests with support of moddn aci
     #############
-    topology_m2.ms["master1"].log.info("Enable the moddn right")
+    topology_m2.ms["supplier1"].log.info("Enable the moddn right")
     _bind_manager(topology_m2)
-    topology_m2.ms["master1"].config.set(CONFIG_MODDN_ACI_ATTR, 'on')
+    topology_m2.ms["supplier1"].config.set(CONFIG_MODDN_ACI_ATTR, 'on')
 
-    topology_m2.ms["master1"].log.info("\n\n######## MOVE staging -> Prod (10) ########\n")
+    topology_m2.ms["supplier1"].log.info("\n\n######## MOVE staging -> Prod (10) ########\n")
 
     _bind_normal(topology_m2)
     old_rdn = "uid=%s10" % NEW_ACCOUNT
@@ -545,14 +545,14 @@ def test_moddn_staging_prod_9(topology_m2, moddn_setup):
     # Try to rename without the appropriate ACI  => INSUFFICIENT_ACCESS
     #
     try:
-        topology_m2.ms["master1"].log.info("Try to MODDN %s -> %s,%s" % (old_dn, new_rdn, new_superior))
-        topology_m2.ms["master1"].rename_s(old_dn, new_rdn, newsuperior=new_superior)
+        topology_m2.ms["supplier1"].log.info("Try to MODDN %s -> %s,%s" % (old_dn, new_rdn, new_superior))
+        topology_m2.ms["supplier1"].rename_s(old_dn, new_rdn, newsuperior=new_superior)
         assert 0
     except AssertionError:
-        topology_m2.ms["master1"].log.info(
+        topology_m2.ms["supplier1"].log.info(
             "Exception (not really expected exception but that is fine as it fails to rename)")
     except Exception as e:
-        topology_m2.ms["master1"].log.info("Exception (expected): %s" % type(e).__name__)
+        topology_m2.ms["supplier1"].log.info("Exception (expected): %s" % type(e).__name__)
         assert isinstance(e, ldap.INSUFFICIENT_ACCESS)
 
     #
@@ -569,14 +569,14 @@ def test_moddn_staging_prod_9(topology_m2, moddn_setup):
     _bind_normal(topology_m2)
 
     try:
-        topology_m2.ms["master1"].log.info("Try to MODDN %s -> %s,%s" % (old_dn, new_rdn, new_superior))
-        topology_m2.ms["master1"].rename_s(old_dn, new_rdn, newsuperior=new_superior)
+        topology_m2.ms["supplier1"].log.info("Try to MODDN %s -> %s,%s" % (old_dn, new_rdn, new_superior))
+        topology_m2.ms["supplier1"].rename_s(old_dn, new_rdn, newsuperior=new_superior)
         assert 0
     except AssertionError:
-        topology_m2.ms["master1"].log.info(
+        topology_m2.ms["supplier1"].log.info(
             "Exception (not really expected exception but that is fine as it fails to rename)")
     except Exception as e:
-        topology_m2.ms["master1"].log.info("Exception (expected): %s" % type(e).__name__)
+        topology_m2.ms["supplier1"].log.info("Exception (expected): %s" % type(e).__name__)
         assert isinstance(e, ldap.INSUFFICIENT_ACCESS)
 
     _bind_manager(topology_m2)
@@ -585,14 +585,14 @@ def test_moddn_staging_prod_9(topology_m2, moddn_setup):
     _bind_normal(topology_m2)
 
     # Add the moddn aci that will be evaluated because of the config flag
-    topology_m2.ms["master1"].log.info("\n\n######## MOVE to and from equality filter ########\n")
+    topology_m2.ms["supplier1"].log.info("\n\n######## MOVE to and from equality filter ########\n")
     _bind_manager(topology_m2)
     _moddn_aci_staging_to_production(topology_m2, mod_type=ldap.MOD_ADD,
                                      target_from=STAGING_DN, target_to=PRODUCTION_DN)
     _bind_normal(topology_m2)
 
-    topology_m2.ms["master1"].log.info("Try to MODDN %s -> %s,%s" % (old_dn, new_rdn, new_superior))
-    topology_m2.ms["master1"].rename_s(old_dn, new_rdn, newsuperior=new_superior)
+    topology_m2.ms["supplier1"].log.info("Try to MODDN %s -> %s,%s" % (old_dn, new_rdn, new_superior))
+    topology_m2.ms["supplier1"].rename_s(old_dn, new_rdn, newsuperior=new_superior)
 
     # remove the moddn aci
     _bind_manager(topology_m2)
@@ -606,7 +606,7 @@ def test_moddn_prod_staging(topology_m2, moddn_setup):
        but not move back ACCOUNT11 from prod to staging
 
     :id: 2b061e92-483f-4399-9f56-8d1c1898b043
-    :setup: MMR with two masters,
+    :setup: MMR with two suppliers,
             M1 - staging DIT
             M2 - production DIT
             add test accounts in staging DIT
@@ -620,7 +620,7 @@ def test_moddn_prod_staging(topology_m2, moddn_setup):
         3. It should fail due to INSUFFICIENT_ACCESS
     """
 
-    topology_m2.ms["master1"].log.info("\n\n######## MOVE staging -> Prod (11) ########\n")
+    topology_m2.ms["supplier1"].log.info("\n\n######## MOVE staging -> Prod (11) ########\n")
 
     _bind_normal(topology_m2)
 
@@ -633,25 +633,25 @@ def test_moddn_prod_staging(topology_m2, moddn_setup):
     # Try to rename without the appropriate ACI  => INSUFFICIENT_ACCESS
     #
     try:
-        topology_m2.ms["master1"].log.info("Try to MODDN %s -> %s,%s" % (old_dn, new_rdn, new_superior))
-        topology_m2.ms["master1"].rename_s(old_dn, new_rdn, newsuperior=new_superior)
+        topology_m2.ms["supplier1"].log.info("Try to MODDN %s -> %s,%s" % (old_dn, new_rdn, new_superior))
+        topology_m2.ms["supplier1"].rename_s(old_dn, new_rdn, newsuperior=new_superior)
         assert 0
     except AssertionError:
-        topology_m2.ms["master1"].log.info(
+        topology_m2.ms["supplier1"].log.info(
             "Exception (not really expected exception but that is fine as it fails to rename)")
     except Exception as e:
-        topology_m2.ms["master1"].log.info("Exception (expected): %s" % type(e).__name__)
+        topology_m2.ms["supplier1"].log.info("Exception (expected): %s" % type(e).__name__)
         assert isinstance(e, ldap.INSUFFICIENT_ACCESS)
 
     # successful MOD with the ACI
-    topology_m2.ms["master1"].log.info("\n\n######## MOVE to and from equality filter ########\n")
+    topology_m2.ms["supplier1"].log.info("\n\n######## MOVE to and from equality filter ########\n")
     _bind_manager(topology_m2)
     _moddn_aci_staging_to_production(topology_m2, mod_type=ldap.MOD_ADD,
                                      target_from=STAGING_DN, target_to=PRODUCTION_DN)
     _bind_normal(topology_m2)
 
-    topology_m2.ms["master1"].log.info("Try to MODDN %s -> %s,%s" % (old_dn, new_rdn, new_superior))
-    topology_m2.ms["master1"].rename_s(old_dn, new_rdn, newsuperior=new_superior)
+    topology_m2.ms["supplier1"].log.info("Try to MODDN %s -> %s,%s" % (old_dn, new_rdn, new_superior))
+    topology_m2.ms["supplier1"].rename_s(old_dn, new_rdn, newsuperior=new_superior)
 
     # Now check we can not move back the entry to staging
     old_rdn = "uid=%s11" % NEW_ACCOUNT
@@ -665,14 +665,14 @@ def test_moddn_prod_staging(topology_m2, moddn_setup):
     _bind_normal(topology_m2)
 
     try:
-        topology_m2.ms["master1"].log.info("Try to move back MODDN %s -> %s,%s" % (old_dn, new_rdn, new_superior))
-        topology_m2.ms["master1"].rename_s(old_dn, new_rdn, newsuperior=new_superior)
+        topology_m2.ms["supplier1"].log.info("Try to move back MODDN %s -> %s,%s" % (old_dn, new_rdn, new_superior))
+        topology_m2.ms["supplier1"].rename_s(old_dn, new_rdn, newsuperior=new_superior)
         assert 0
     except AssertionError:
-        topology_m2.ms["master1"].log.info(
+        topology_m2.ms["supplier1"].log.info(
             "Exception (not really expected exception but that is fine as it fails to rename)")
     except Exception as e:
-        topology_m2.ms["master1"].log.info("Exception (expected): %s" % type(e).__name__)
+        topology_m2.ms["supplier1"].log.info("Exception (expected): %s" % type(e).__name__)
         assert isinstance(e, ldap.INSUFFICIENT_ACCESS)
 
     _bind_manager(topology_m2)
@@ -690,7 +690,7 @@ def test_check_repl_M2_to_M1(topology_m2, moddn_setup):
     """Checks that replication is still working M2->M1, using ACCOUNT12
 
     :id: 08ac131d-34b7-443f-aacd-23025bbd7de1
-    :setup: MMR with two masters,
+    :setup: MMR with two suppliers,
             M1 - staging DIT
             M2 - production DIT
             add test accounts in staging DIT
@@ -702,18 +702,18 @@ def test_check_repl_M2_to_M1(topology_m2, moddn_setup):
         2. It should pass
     """
 
-    topology_m2.ms["master1"].log.info("Bind as %s (M2)" % DN_DM)
-    topology_m2.ms["master2"].simple_bind_s(DN_DM, PASSWORD)
+    topology_m2.ms["supplier1"].log.info("Bind as %s (M2)" % DN_DM)
+    topology_m2.ms["supplier2"].simple_bind_s(DN_DM, PASSWORD)
 
     rdn = "uid=%s12" % NEW_ACCOUNT
     dn = "%s,%s" % (rdn, STAGING_DN)
-    new_account = UserAccount(topology_m2.ms["master2"], dn)
+    new_account = UserAccount(topology_m2.ms["supplier2"], dn)
 
     # First, wait for the ACCOUNT12 entry to be replicated to M2
     loop = 0
     while loop <= 10:
         try:
-            ent = topology_m2.ms["master2"].getEntry(dn, ldap.SCOPE_BASE, "(objectclass=*)")
+            ent = topology_m2.ms["supplier2"].getEntry(dn, ldap.SCOPE_BASE, "(objectclass=*)")
             break
         except ldap.NO_SUCH_OBJECT:
             time.sleep(1)
@@ -722,12 +722,12 @@ def test_check_repl_M2_to_M1(topology_m2, moddn_setup):
 
     attribute = 'description'
     tested_value = b'Hello world'
-    topology_m2.ms["master1"].log.info("Update (M2) %s (%s)" % (dn, attribute))
+    topology_m2.ms["supplier1"].log.info("Update (M2) %s (%s)" % (dn, attribute))
     new_account.add(attribute, tested_value)
 
     loop = 0
     while loop <= 10:
-        ent = topology_m2.ms["master1"].getEntry(dn, ldap.SCOPE_BASE, "(objectclass=*)")
+        ent = topology_m2.ms["supplier1"].getEntry(dn, ldap.SCOPE_BASE, "(objectclass=*)")
         assert ent is not None
         if ent.hasAttr(attribute) and (ent.getValue(attribute) == tested_value):
             break
@@ -735,7 +735,7 @@ def test_check_repl_M2_to_M1(topology_m2, moddn_setup):
         time.sleep(1)
         loop += 1
     assert loop < 10
-    topology_m2.ms["master1"].log.info("Update %s (%s) replicated on M1" % (dn, attribute))
+    topology_m2.ms["supplier1"].log.info("Update %s (%s) replicated on M1" % (dn, attribute))
 
 
 def test_moddn_staging_prod_except(topology_m2, moddn_setup):
@@ -743,7 +743,7 @@ def test_moddn_staging_prod_except(topology_m2, moddn_setup):
        but fails to move entry NEW_ACCOUNT14 from staging to prod_except
 
     :id: 02d34f4c-8574-428d-b43f-31227426392c
-    :setup: MMR with two masters,
+    :setup: MMR with two suppliers,
             M1 - staging DIT
             M2 - production DIT
             add test accounts in staging DIT
@@ -760,7 +760,7 @@ def test_moddn_staging_prod_except(topology_m2, moddn_setup):
         4. It should pass
     """
 
-    topology_m2.ms["master1"].log.info("\n\n######## MOVE staging -> Prod (13) ########\n")
+    topology_m2.ms["supplier1"].log.info("\n\n######## MOVE staging -> Prod (13) ########\n")
     _bind_normal(topology_m2)
 
     old_rdn = "uid=%s13" % NEW_ACCOUNT
@@ -772,44 +772,44 @@ def test_moddn_staging_prod_except(topology_m2, moddn_setup):
     # Try to rename without the appropriate ACI  => INSUFFICIENT_ACCESS
     #
     try:
-        topology_m2.ms["master1"].log.info("Try to MODDN %s -> %s,%s" % (old_dn, new_rdn, new_superior))
-        topology_m2.ms["master1"].rename_s(old_dn, new_rdn, newsuperior=new_superior)
+        topology_m2.ms["supplier1"].log.info("Try to MODDN %s -> %s,%s" % (old_dn, new_rdn, new_superior))
+        topology_m2.ms["supplier1"].rename_s(old_dn, new_rdn, newsuperior=new_superior)
         assert 0
     except AssertionError:
-        topology_m2.ms["master1"].log.info(
+        topology_m2.ms["supplier1"].log.info(
             "Exception (not really expected exception but that is fine as it fails to rename)")
     except Exception as e:
-        topology_m2.ms["master1"].log.info("Exception (expected): %s" % type(e).__name__)
+        topology_m2.ms["supplier1"].log.info("Exception (expected): %s" % type(e).__name__)
         assert isinstance(e, ldap.INSUFFICIENT_ACCESS)
 
     # successful MOD with the ACI
-    topology_m2.ms["master1"].log.info("\n\n######## MOVE to and from equality filter ########\n")
+    topology_m2.ms["supplier1"].log.info("\n\n######## MOVE to and from equality filter ########\n")
     _bind_manager(topology_m2)
     _moddn_aci_staging_to_production(topology_m2, mod_type=ldap.MOD_ADD,
                                      target_from=STAGING_DN, target_to=PRODUCTION_DN)
     _moddn_aci_deny_tree(topology_m2, mod_type=ldap.MOD_ADD)
     _bind_normal(topology_m2)
 
-    topology_m2.ms["master1"].log.info("Try to MODDN %s -> %s,%s" % (old_dn, new_rdn, new_superior))
-    topology_m2.ms["master1"].rename_s(old_dn, new_rdn, newsuperior=new_superior)
+    topology_m2.ms["supplier1"].log.info("Try to MODDN %s -> %s,%s" % (old_dn, new_rdn, new_superior))
+    topology_m2.ms["supplier1"].rename_s(old_dn, new_rdn, newsuperior=new_superior)
 
     #
     # Now try to move an entry under except
     #
-    topology_m2.ms["master1"].log.info("\n\n######## MOVE staging -> Prod/Except (14) ########\n")
+    topology_m2.ms["supplier1"].log.info("\n\n######## MOVE staging -> Prod/Except (14) ########\n")
     old_rdn = "uid=%s14" % NEW_ACCOUNT
     old_dn = "%s,%s" % (old_rdn, STAGING_DN)
     new_rdn = old_rdn
     new_superior = PROD_EXCEPT_DN
     try:
-        topology_m2.ms["master1"].log.info("Try to MODDN %s -> %s,%s" % (old_dn, new_rdn, new_superior))
-        topology_m2.ms["master1"].rename_s(old_dn, new_rdn, newsuperior=new_superior)
+        topology_m2.ms["supplier1"].log.info("Try to MODDN %s -> %s,%s" % (old_dn, new_rdn, new_superior))
+        topology_m2.ms["supplier1"].rename_s(old_dn, new_rdn, newsuperior=new_superior)
         assert 0
     except AssertionError:
-        topology_m2.ms["master1"].log.info(
+        topology_m2.ms["supplier1"].log.info(
             "Exception (not really expected exception but that is fine as it fails to rename)")
     except Exception as e:
-        topology_m2.ms["master1"].log.info("Exception (expected): %s" % type(e).__name__)
+        topology_m2.ms["supplier1"].log.info("Exception (expected): %s" % type(e).__name__)
         assert isinstance(e, ldap.INSUFFICIENT_ACCESS)
 
     # successful MOD with the both ACI
@@ -824,7 +824,7 @@ def test_mode_default_ger_no_moddn(topology_m2, moddn_setup):
     """mode moddn_aci : Check Get Effective Rights Controls for entries
 
     :id: f4785d73-3b14-49c0-b981-d6ff96fa3496
-    :setup: MMR with two masters,
+    :setup: MMR with two suppliers,
             M1 - staging DIT
             M2 - production DIT
             add test accounts in staging DIT
@@ -836,21 +836,21 @@ def test_mode_default_ger_no_moddn(topology_m2, moddn_setup):
         2. It should pass
     """
 
-    topology_m2.ms["master1"].log.info("\n\n######## mode moddn_aci : GER no moddn  ########\n")
+    topology_m2.ms["supplier1"].log.info("\n\n######## mode moddn_aci : GER no moddn  ########\n")
     request_ctrl = GetEffectiveRightsControl(criticality=True,
                                              authzId=ensure_bytes("dn: " + BIND_DN))
-    msg_id = topology_m2.ms["master1"].search_ext(PRODUCTION_DN,
+    msg_id = topology_m2.ms["supplier1"].search_ext(PRODUCTION_DN,
                                                   ldap.SCOPE_SUBTREE,
                                                   "objectclass=*",
                                                   serverctrls=[request_ctrl])
-    rtype, rdata, rmsgid, response_ctrl = topology_m2.ms["master1"].result3(msg_id)
+    rtype, rdata, rmsgid, response_ctrl = topology_m2.ms["supplier1"].result3(msg_id)
     # ger={}
     value = ''
     for dn, attrs in rdata:
-        topology_m2.ms["master1"].log.info("dn: %s" % dn)
+        topology_m2.ms["supplier1"].log.info("dn: %s" % dn)
         value = attrs['entryLevelRights'][0]
 
-    topology_m2.ms["master1"].log.info("########  entryLevelRights: %r" % value)
+    topology_m2.ms["supplier1"].log.info("########  entryLevelRights: %r" % value)
     assert b'n' not in value
 
 
@@ -858,7 +858,7 @@ def test_mode_default_ger_with_moddn(topology_m2, moddn_setup):
     """This test case adds the moddn aci and check ger contains 'n'
 
     :id: a752a461-432d-483a-89c0-dfb34045a969
-    :setup: MMR with two masters,
+    :setup: MMR with two suppliers,
             M1 - staging DIT
             M2 - production DIT
             add test accounts in staging DIT
@@ -874,7 +874,7 @@ def test_mode_default_ger_with_moddn(topology_m2, moddn_setup):
         4. It should pass
     """
 
-    topology_m2.ms["master1"].log.info("\n\n######## mode moddn_aci: GER with moddn ########\n")
+    topology_m2.ms["supplier1"].log.info("\n\n######## mode moddn_aci: GER with moddn ########\n")
 
     # successful MOD with the ACI
     _bind_manager(topology_m2)
@@ -884,18 +884,18 @@ def test_mode_default_ger_with_moddn(topology_m2, moddn_setup):
 
     request_ctrl = GetEffectiveRightsControl(criticality=True,
                                              authzId=ensure_bytes("dn: " + BIND_DN))
-    msg_id = topology_m2.ms["master1"].search_ext(PRODUCTION_DN,
+    msg_id = topology_m2.ms["supplier1"].search_ext(PRODUCTION_DN,
                                                   ldap.SCOPE_SUBTREE,
                                                   "objectclass=*",
                                                   serverctrls=[request_ctrl])
-    rtype, rdata, rmsgid, response_ctrl = topology_m2.ms["master1"].result3(msg_id)
+    rtype, rdata, rmsgid, response_ctrl = topology_m2.ms["supplier1"].result3(msg_id)
     # ger={}
     value = ''
     for dn, attrs in rdata:
-        topology_m2.ms["master1"].log.info("dn: %s" % dn)
+        topology_m2.ms["supplier1"].log.info("dn: %s" % dn)
         value = attrs['entryLevelRights'][0]
 
-    topology_m2.ms["master1"].log.info("########  entryLevelRights: %r" % value)
+    topology_m2.ms["supplier1"].log.info("########  entryLevelRights: %r" % value)
     assert b'n' in value
 
     # successful MOD with the both ACI
@@ -909,7 +909,7 @@ def test_mode_legacy_ger_no_moddn1(topology_m2, moddn_setup):
     """This test checks mode legacy : GER no moddn
 
     :id: e783e05b-d0d0-4fd4-9572-258a81b7bd24
-    :setup: MMR with two masters,
+    :setup: MMR with two suppliers,
             M1 - staging DIT
             M2 - production DIT
             add test accounts in staging DIT
@@ -925,24 +925,24 @@ def test_mode_legacy_ger_no_moddn1(topology_m2, moddn_setup):
         4. It should pass
     """
 
-    topology_m2.ms["master1"].log.info("\n\n######## Disable the moddn aci mod ########\n")
+    topology_m2.ms["supplier1"].log.info("\n\n######## Disable the moddn aci mod ########\n")
     _bind_manager(topology_m2)
-    topology_m2.ms["master1"].config.set(CONFIG_MODDN_ACI_ATTR, 'off')
+    topology_m2.ms["supplier1"].config.set(CONFIG_MODDN_ACI_ATTR, 'off')
 
-    topology_m2.ms["master1"].log.info("\n\n######## mode legacy 1: GER no moddn  ########\n")
+    topology_m2.ms["supplier1"].log.info("\n\n######## mode legacy 1: GER no moddn  ########\n")
     request_ctrl = GetEffectiveRightsControl(criticality=True, authzId=ensure_bytes("dn: " + BIND_DN))
-    msg_id = topology_m2.ms["master1"].search_ext(PRODUCTION_DN,
+    msg_id = topology_m2.ms["supplier1"].search_ext(PRODUCTION_DN,
                                                   ldap.SCOPE_SUBTREE,
                                                   "objectclass=*",
                                                   serverctrls=[request_ctrl])
-    rtype, rdata, rmsgid, response_ctrl = topology_m2.ms["master1"].result3(msg_id)
+    rtype, rdata, rmsgid, response_ctrl = topology_m2.ms["supplier1"].result3(msg_id)
     # ger={}
     value = ''
     for dn, attrs in rdata:
-        topology_m2.ms["master1"].log.info("dn: %s" % dn)
+        topology_m2.ms["supplier1"].log.info("dn: %s" % dn)
         value = attrs['entryLevelRights'][0]
 
-    topology_m2.ms["master1"].log.info("########  entryLevelRights: %r" % value)
+    topology_m2.ms["supplier1"].log.info("########  entryLevelRights: %r" % value)
     assert b'n' not in value
 
 
@@ -950,7 +950,7 @@ def test_mode_legacy_ger_no_moddn2(topology_m2, moddn_setup):
     """This test checks mode legacy : GER no moddn
 
     :id: af87e024-1744-4f1d-a2d3-ea2687e2351d
-    :setup: MMR with two masters,
+    :setup: MMR with two suppliers,
             M1 - staging DIT
             M2 - production DIT
             add test accounts in staging DIT
@@ -968,11 +968,11 @@ def test_mode_legacy_ger_no_moddn2(topology_m2, moddn_setup):
         5. It should pass
     """
 
-    topology_m2.ms["master1"].log.info("\n\n######## Disable the moddn aci mod ########\n")
+    topology_m2.ms["supplier1"].log.info("\n\n######## Disable the moddn aci mod ########\n")
     _bind_manager(topology_m2)
-    topology_m2.ms["master1"].config.set(CONFIG_MODDN_ACI_ATTR, 'off')
+    topology_m2.ms["supplier1"].config.set(CONFIG_MODDN_ACI_ATTR, 'off')
 
-    topology_m2.ms["master1"].log.info("\n\n######## mode legacy 2: GER no moddn  ########\n")
+    topology_m2.ms["supplier1"].log.info("\n\n######## mode legacy 2: GER no moddn  ########\n")
     # successful MOD with the ACI
     _bind_manager(topology_m2)
     _moddn_aci_staging_to_production(topology_m2, mod_type=ldap.MOD_ADD,
@@ -981,18 +981,18 @@ def test_mode_legacy_ger_no_moddn2(topology_m2, moddn_setup):
 
     request_ctrl = GetEffectiveRightsControl(criticality=True,
                                              authzId=ensure_bytes("dn: " + BIND_DN))
-    msg_id = topology_m2.ms["master1"].search_ext(PRODUCTION_DN,
+    msg_id = topology_m2.ms["supplier1"].search_ext(PRODUCTION_DN,
                                                   ldap.SCOPE_SUBTREE,
                                                   "objectclass=*",
                                                   serverctrls=[request_ctrl])
-    rtype, rdata, rmsgid, response_ctrl = topology_m2.ms["master1"].result3(msg_id)
+    rtype, rdata, rmsgid, response_ctrl = topology_m2.ms["supplier1"].result3(msg_id)
     # ger={}
     value = ''
     for dn, attrs in rdata:
-        topology_m2.ms["master1"].log.info("dn: %s" % dn)
+        topology_m2.ms["supplier1"].log.info("dn: %s" % dn)
         value = attrs['entryLevelRights'][0]
 
-    topology_m2.ms["master1"].log.info("########  entryLevelRights: %r" % value)
+    topology_m2.ms["supplier1"].log.info("########  entryLevelRights: %r" % value)
     assert b'n' not in value
 
     # successful MOD with the both ACI
@@ -1006,7 +1006,7 @@ def test_mode_legacy_ger_with_moddn(topology_m2, moddn_setup):
     """This test checks mode legacy : GER with moddn
 
     :id: 37c1e537-1b5d-4fab-b62a-50cd8c5b3493
-    :setup: MMR with two masters,
+    :setup: MMR with two suppliers,
             M1 - staging DIT
             M2 - production DIT
             add test accounts in staging DIT
@@ -1026,13 +1026,13 @@ def test_mode_legacy_ger_with_moddn(topology_m2, moddn_setup):
         6. It should pass
     """
 
-    suffix = Domain(topology_m2.ms["master1"], SUFFIX)
+    suffix = Domain(topology_m2.ms["supplier1"], SUFFIX)
 
-    topology_m2.ms["master1"].log.info("\n\n######## Disable the moddn aci mod ########\n")
+    topology_m2.ms["supplier1"].log.info("\n\n######## Disable the moddn aci mod ########\n")
     _bind_manager(topology_m2)
-    topology_m2.ms["master1"].config.set(CONFIG_MODDN_ACI_ATTR, 'off')
+    topology_m2.ms["supplier1"].config.set(CONFIG_MODDN_ACI_ATTR, 'off')
 
-    topology_m2.ms["master1"].log.info("\n\n######## mode legacy : GER with moddn  ########\n")
+    topology_m2.ms["supplier1"].log.info("\n\n######## mode legacy : GER with moddn  ########\n")
 
     # being allowed to read/write the RDN attribute used to be enough to allow the RDN change
     ACI_TARGET = "(target = \"ldap:///%s\")(targetattr=\"uid\")" % (PRODUCTION_DN)
@@ -1046,18 +1046,18 @@ def test_mode_legacy_ger_with_moddn(topology_m2, moddn_setup):
     _bind_normal(topology_m2)
 
     request_ctrl = GetEffectiveRightsControl(criticality=True, authzId=ensure_bytes("dn: " + BIND_DN))
-    msg_id = topology_m2.ms["master1"].search_ext(PRODUCTION_DN,
+    msg_id = topology_m2.ms["supplier1"].search_ext(PRODUCTION_DN,
                                                   ldap.SCOPE_SUBTREE,
                                                   "objectclass=*",
                                                   serverctrls=[request_ctrl])
-    rtype, rdata, rmsgid, response_ctrl = topology_m2.ms["master1"].result3(msg_id)
+    rtype, rdata, rmsgid, response_ctrl = topology_m2.ms["supplier1"].result3(msg_id)
     # ger={}
     value = ''
     for dn, attrs in rdata:
-        topology_m2.ms["master1"].log.info("dn: %s" % dn)
+        topology_m2.ms["supplier1"].log.info("dn: %s" % dn)
         value = attrs['entryLevelRights'][0]
 
-    topology_m2.ms["master1"].log.info("########  entryLevelRights: %r" % value)
+    topology_m2.ms["supplier1"].log.info("########  entryLevelRights: %r" % value)
     assert b'n' in value
 
     # successful MOD with the both ACI
@@ -1068,8 +1068,8 @@ def test_mode_legacy_ger_with_moddn(topology_m2, moddn_setup):
 
 @pytest.fixture(scope="module")
 def rdn_write_setup(topology_m2):
-    topology_m2.ms["master1"].log.info("\n\n######## Add entry tuser ########\n")
-    user = UserAccount(topology_m2.ms["master1"], SRC_ENTRY_DN)
+    topology_m2.ms["supplier1"].log.info("\n\n######## Add entry tuser ########\n")
+    user = UserAccount(topology_m2.ms["supplier1"], SRC_ENTRY_DN)
     user_props = TEST_USER_PROPERTIES.copy()
     user_props.update({'sn': SRC_ENTRY_CN,
                        'cn': SRC_ENTRY_CN,
@@ -1081,7 +1081,7 @@ def test_rdn_write_get_ger(topology_m2, rdn_write_setup):
     """This test checks GER rights for anonymous
 
     :id: d5d85f87-b53d-4f50-8fa6-a9e55c75419b
-    :setup: MMR with two masters,
+    :setup: MMR with two suppliers,
             Add entry tuser
     :steps:
         1. Search for GER controls on M1
@@ -1094,19 +1094,19 @@ def test_rdn_write_get_ger(topology_m2, rdn_write_setup):
     """
 
     ANONYMOUS_DN = ""
-    topology_m2.ms["master1"].log.info("\n\n######## GER rights for anonymous ########\n")
+    topology_m2.ms["supplier1"].log.info("\n\n######## GER rights for anonymous ########\n")
     request_ctrl = GetEffectiveRightsControl(criticality=True,
                                              authzId=ensure_bytes("dn:" + ANONYMOUS_DN))
-    msg_id = topology_m2.ms["master1"].search_ext(SUFFIX,
+    msg_id = topology_m2.ms["supplier1"].search_ext(SUFFIX,
                                                   ldap.SCOPE_SUBTREE,
                                                   "objectclass=*",
                                                   serverctrls=[request_ctrl])
-    rtype, rdata, rmsgid, response_ctrl = topology_m2.ms["master1"].result3(msg_id)
+    rtype, rdata, rmsgid, response_ctrl = topology_m2.ms["supplier1"].result3(msg_id)
     value = ''
     for dn, attrs in rdata:
-        topology_m2.ms["master1"].log.info("dn: %s" % dn)
+        topology_m2.ms["supplier1"].log.info("dn: %s" % dn)
         for value in attrs['entryLevelRights']:
-            topology_m2.ms["master1"].log.info("########  entryLevelRights: %r" % value)
+            topology_m2.ms["supplier1"].log.info("########  entryLevelRights: %r" % value)
             assert b'n' not in value
 
 
@@ -1114,7 +1114,7 @@ def test_rdn_write_modrdn_anonymous(topology_m2, rdn_write_setup):
     """Tests anonymous user for modrdn
 
     :id: fc07be23-3341-44ab-a53c-c68c5f9569c7
-    :setup: MMR with two masters,
+    :setup: MMR with two suppliers,
             Add entry tuser
     :steps:
         1. Bind as anonymous user
@@ -1127,27 +1127,27 @@ def test_rdn_write_modrdn_anonymous(topology_m2, rdn_write_setup):
     """
 
     ANONYMOUS_DN = ""
-    topology_m2.ms["master1"].close()
-    topology_m2.ms["master1"].binddn = ANONYMOUS_DN
-    topology_m2.ms["master1"].open()
-    msg_id = topology_m2.ms["master1"].search_ext("", ldap.SCOPE_BASE, "objectclass=*")
-    rtype, rdata, rmsgid, response_ctrl = topology_m2.ms["master1"].result3(msg_id)
+    topology_m2.ms["supplier1"].close()
+    topology_m2.ms["supplier1"].binddn = ANONYMOUS_DN
+    topology_m2.ms["supplier1"].open()
+    msg_id = topology_m2.ms["supplier1"].search_ext("", ldap.SCOPE_BASE, "objectclass=*")
+    rtype, rdata, rmsgid, response_ctrl = topology_m2.ms["supplier1"].result3(msg_id)
     for dn, attrs in rdata:
-        topology_m2.ms["master1"].log.info("dn: %s" % dn)
+        topology_m2.ms["supplier1"].log.info("dn: %s" % dn)
         for attr in attrs:
-            topology_m2.ms["master1"].log.info("########  %r: %r" % (attr, attrs[attr]))
+            topology_m2.ms["supplier1"].log.info("########  %r: %r" % (attr, attrs[attr]))
 
     try:
-        topology_m2.ms["master1"].rename_s(SRC_ENTRY_DN, "cn=%s" % DST_ENTRY_CN, delold=True)
+        topology_m2.ms["supplier1"].rename_s(SRC_ENTRY_DN, "cn=%s" % DST_ENTRY_CN, delold=True)
     except Exception as e:
-        topology_m2.ms["master1"].log.info("Exception (expected): %s" % type(e).__name__)
+        topology_m2.ms["supplier1"].log.info("Exception (expected): %s" % type(e).__name__)
         assert isinstance(e, ldap.INSUFFICIENT_ACCESS)
 
     try:
-        topology_m2.ms["master1"].getEntry(DST_ENTRY_DN, ldap.SCOPE_BASE, "objectclass=*")
+        topology_m2.ms["supplier1"].getEntry(DST_ENTRY_DN, ldap.SCOPE_BASE, "objectclass=*")
         assert False
     except Exception as e:
-        topology_m2.ms["master1"].log.info("The entry was not renamed (expected)")
+        topology_m2.ms["supplier1"].log.info("The entry was not renamed (expected)")
         assert isinstance(e, ldap.NO_SUCH_OBJECT)
 
     _bind_manager(topology_m2)

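For readers porting their own suites, here is a minimal sketch of the renamed topology access used throughout the file above (a hypothetical test body, assuming the standard lib389 topology_m2 fixture; only the dictionary keys changed):

    # Sketch only: "master1"/"master2" keys become "supplier1"/"supplier2".
    def test_renamed_fixture_keys(topology_m2):
        m1 = topology_m2.ms["supplier1"]
        m2 = topology_m2.ms["supplier2"]
        m1.log.info("supplier1 is %s" % m1.serverid)
        m2.log.info("supplier2 is %s" % m2.serverid)
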
+ 101 - 101
dirsrvtests/tests/suites/automember_plugin/basic_test.py

@@ -51,12 +51,12 @@ def add_base_entries(topo):
     for suffix, backend_name in [(BASE_SUFF, 'AutoMembers'), (SUBSUFFIX, 'SubAutoMembers'),
                                  (TEST_BASE, 'testAutoMembers'), (BASE_REPL, 'ReplAutoMembers'),
                                  ("dc=SubSuffix,{}".format(BASE_REPL), 'ReplSubAutoMembers')]:
-        Backends(topo.ms["master1"]).create(properties={
+        Backends(topo.ms["supplier1"]).create(properties={
             'cn': backend_name,
             'nsslapd-suffix': suffix,
             'nsslapd-CACHE_SIZE': CACHE_SIZE,
             'nsslapd-CACHEMEM_SIZE': CACHEMEM_SIZE})
-        Domain(topo.ms["master1"], suffix).create(properties={
+        Domain(topo.ms["supplier1"], suffix).create(properties={
             'dc': suffix.split('=')[1].split(',')[0],
             'aci': [
                 f'(targetattr="userPassword")(version 3.0;aci  "Replication Manager '
@@ -72,7 +72,7 @@ def add_base_entries(topo):
                           (BASE_SUFF, 'Employees'),
                           (BASE_SUFF, 'TaskEmployees'),
                           (TEST_BASE, 'Employees')]:
-        OrganizationalUnits(topo.ms["master1"], suffix).create(properties={'ou': ou_cn})
+        OrganizationalUnits(topo.ms["supplier1"], suffix).create(properties={'ou': ou_cn})
 
 
 def add_user(topo, user_id, suffix, uid_no, gid_no, role_usr):
@@ -84,7 +84,7 @@ def add_user(topo, user_id, suffix, uid_no, gid_no, role_usr):
     if ds_is_older('1.4.0'):
         objectclasses.remove('nsAccount')
 
-    user = nsAdminGroups(topo.ms["master1"], suffix, rdn=None).create(properties={
+    user = nsAdminGroups(topo.ms["supplier1"], suffix, rdn=None).create(properties={
         'cn': user_id,
         'sn': user_id,
         'uid': user_id,
@@ -104,14 +104,14 @@ def check_groups(topo, group_dn, user_dn, member):
     """
     Check that group_dn lists user_dn under the given membership attribute.
     """
-    return bool(Group(topo.ms["master1"], group_dn).present(member, user_dn))
+    return bool(Group(topo.ms["supplier1"], group_dn).present(member, user_dn))
 
 
 def add_group(topo, suffix, group_id):
     """
     Create a group with the given cn under the suffix.
     """
-    Groups(topo.ms["master1"], suffix, rdn=None).create(properties={
+    Groups(topo.ms["supplier1"], suffix, rdn=None).create(properties={
         'cn': group_id
     })
 
@@ -120,7 +120,7 @@ def number_memberof(topo, user, number):
     """
     Check that the user carries the expected number of memberOf values.
     """
-    return len(nsAdminGroup(topo.ms["master1"], user).get_attr_vals_utf8('memberOf')) == number
+    return len(nsAdminGroup(topo.ms["supplier1"], user).get_attr_vals_utf8('memberOf')) == number
 
 
 def add_group_entries(topo):
@@ -159,7 +159,7 @@ def add_group_entries(topo):
                                      'Managers', '666'),
                                     ('cn=subsuffGroups,{}'.format(SUBSUFFIX),
                                      'Contractors', '999')]:
-        PosixGroups(topo.ms["master1"], ou_ou, rdn=None).create(properties={
+        PosixGroups(topo.ms["supplier1"], ou_ou, rdn=None).create(properties={
             'cn': group_cn,
             'gidNumber': grp_no
         })
@@ -169,7 +169,7 @@ def add_member_attr(topo, group_dn, user_dn, member):
     """
     Add user_dn to group_dn through the given membership attribute.
     """
-    Group(topo.ms["master1"], group_dn).add(member, user_dn)
+    Group(topo.ms["supplier1"], group_dn).add(member, user_dn)
 
 
 def change_grp_objclass(new_object, member, type_of):
@@ -193,10 +193,10 @@ def _create_all_entries(topo):
     """
     add_base_entries(topo)
     add_group_entries(topo)
-    auto = AutoMembershipPlugin(topo.ms["master1"])
+    auto = AutoMembershipPlugin(topo.ms["supplier1"])
     auto.add("nsslapd-pluginConfigArea", "cn=autoMembersPlugin,{}".format(BASE_REPL))
-    MemberOfPlugin(topo.ms["master1"]).enable()
-    automembers_definitions = AutoMembershipDefinitions(topo.ms["master1"])
+    MemberOfPlugin(topo.ms["supplier1"]).enable()
+    automembers_definitions = AutoMembershipDefinitions(topo.ms["supplier1"])
     automembers_definitions.create(properties={
         'cn': 'userGroups',
         'autoMemberScope': f'ou=Employees,{BASE_SUFF}',
@@ -225,7 +225,7 @@ def _create_all_entries(topo):
         'autoMemberGroupingAttr': 'memberuid:dn',
     })
 
-    automembers_regex_usergroup = AutoMembershipRegexRules(topo.ms["master1"],
+    automembers_regex_usergroup = AutoMembershipRegexRules(topo.ms["supplier1"],
                                                            f'cn=userGroups,{auto.dn}')
     automembers_regex_usergroup.create(properties={
         'cn': 'Managers',
@@ -255,7 +255,7 @@ def _create_all_entries(topo):
         ],
     })
 
-    automembers_regex_sub = AutoMembershipRegexRules(topo.ms["master1"],
+    automembers_regex_sub = AutoMembershipRegexRules(topo.ms["supplier1"],
                                                      f'cn=subsuffGroups,{auto.dn}')
     automembers_regex_sub.create(properties={
         'cn': 'Managers',
@@ -303,7 +303,7 @@ def _create_all_entries(topo):
             'autoMemberGroupingAttr': 'member:dn',
         })
 
-    topo.ms["master1"].restart()
+    topo.ms["supplier1"].restart()
 
 
 def test_disable_the_plug_in(topo, _create_all_entries):
@@ -318,7 +318,7 @@ def test_disable_the_plug_in(topo, _create_all_entries):
         1. Should succeed
         2. Should succeed
     """
-    instance_auto = AutoMembershipPlugin(topo.ms["master1"])
+    instance_auto = AutoMembershipPlugin(topo.ms["supplier1"])
     instance_auto.disable()
     assert not instance_auto.status()
     instance_auto.enable()
@@ -337,7 +337,7 @@ def test_custom_config_area(topo, _create_all_entries):
         1. Should succeed
         2. Should succeed
     """
-    instance_auto = AutoMembershipPlugin(topo.ms["master1"])
+    instance_auto = AutoMembershipPlugin(topo.ms["supplier1"])
     instance_auto.replace("nsslapd-pluginConfigArea", DEFAULT_SUFFIX)
     assert instance_auto.get_attr_val_utf8("nsslapd-pluginConfigArea")
     instance_auto.remove("nsslapd-pluginConfigArea", DEFAULT_SUFFIX)
@@ -368,7 +368,7 @@ def test_ability_to_control_behavior_of_modifiers_name(topo, _create_all_entries
         6. Should succeed
         7. Should succeed
     """
-    instance1 = topo.ms["master1"]
+    instance1 = topo.ms["supplier1"]
     configure = Config(instance1)
     configure.replace('nsslapd-plugin-binddn-tracking', 'on')
     instance1.restart()
@@ -523,8 +523,8 @@ def test_multi_valued_automemberdefaultgroup_with_uniquemember(topo, _create_all
         5. Should succeed
     """
     test_id = "autoMembers_09"
-    instance = topo.ms["master1"]
-    auto = AutoMembershipPlugin(topo.ms["master1"])
+    instance = topo.ms["supplier1"]
+    auto = AutoMembershipPlugin(topo.ms["supplier1"])
     # Modify automember config entry to use uniquemember: cn=testuserGroups,PLUGIN_AUTO
     AutoMembershipDefinition(
         instance, "cn=testuserGroups,{}".format(auto.dn)).replace('autoMemberGroupingAttr',
@@ -536,12 +536,12 @@ def test_multi_valued_automemberdefaultgroup_with_uniquemember(topo, _create_all
     default_group4 = "cn=TestDef4,CN=testuserGroups,{}".format(TEST_BASE)
     default_group5 = "cn=TestDef5,CN=testuserGroups,{}".format(TEST_BASE)
     for grp in (default_group1, default_group2, default_group3, default_group4, default_group5):
-        instance_of_group = Group(topo.ms["master1"], grp)
+        instance_of_group = Group(topo.ms["supplier1"], grp)
         change_grp_objclass("groupOfUniqueNames", "member", instance_of_group)
     # Add user: uid=User_{test_id}, AutoMemScope
     user = add_user(topo, "User_{}".format(test_id), AUTO_MEM_SCOPE_TEST, "19", "14", "New")
     # Checking groups...
-    assert user.dn.lower() in UniqueGroup(topo.ms["master1"],
+    assert user.dn.lower() in UniqueGroup(topo.ms["supplier1"],
                                           default_group1).get_attr_val_utf8("uniqueMember")
     # Delete user uid=User_{test_id},AutoMemScope
     user.delete()
@@ -550,9 +550,9 @@ def test_multi_valued_automemberdefaultgroup_with_uniquemember(topo, _create_all
         instance, "cn=testuserGroups,{}".format(auto.dn)).replace('autoMemberGroupingAttr',
                                                                   "member: dn")
     for grp in [default_group1, default_group2, default_group3, default_group4, default_group5]:
-        instance_of_group = UniqueGroup(topo.ms["master1"], grp)
+        instance_of_group = UniqueGroup(topo.ms["supplier1"], grp)
         change_grp_objclass("groupOfNames", "uniquemember", instance_of_group)
-    topo.ms["master1"].restart()
+    topo.ms["supplier1"].restart()
 
 
 def test_invalid_automembergroupingattr_member(topo, _create_all_entries):
@@ -575,7 +575,7 @@ def test_invalid_automembergroupingattr_member(topo, _create_all_entries):
     """
     test_id = "autoMembers_10"
     default_group = "cn=TestDef1,CN=testuserGroups,{}".format(TEST_BASE)
-    instance_of_group = Group(topo.ms["master1"], default_group)
+    instance_of_group = Group(topo.ms["supplier1"], default_group)
     change_grp_objclass("groupOfUniqueNames", "member", instance_of_group)
     with pytest.raises(ldap.UNWILLING_TO_PERFORM):
         add_user(topo, "User_{}".format(test_id), AUTO_MEM_SCOPE_TEST, "19", "20", "Invalid")
@@ -611,7 +611,7 @@ def test_valid_and_invalid_automembergroupingattr(topo, _create_all_entries):
     default_group_5 = "cn=TestDef5,CN=testuserGroups,{}".format(TEST_BASE)
     grp_4_5 = [default_group_4, default_group_5]
     for grp in grp_4_5:
-        instance_of_group = Group(topo.ms["master1"], grp)
+        instance_of_group = Group(topo.ms["supplier1"], grp)
         change_grp_objclass("groupOfUniqueNames", "member", instance_of_group)
     with pytest.raises(ldap.UNWILLING_TO_PERFORM):
         add_user(topo, "User_{}".format(test_id), AUTO_MEM_SCOPE_TEST, "19", "24", "MixUsers")
@@ -623,7 +623,7 @@ def test_valid_and_invalid_automembergroupingattr(topo, _create_all_entries):
             assert check_groups(topo, grp, "cn=User_{},{}".format(test_id,
                                                                   AUTO_MEM_SCOPE_TEST), "member")
     for grp in grp_4_5:
-        instance_of_group = Group(topo.ms["master1"], grp)
+        instance_of_group = Group(topo.ms["supplier1"], grp)
         change_grp_objclass("groupOfNames", "uniquemember", instance_of_group)
 
 
@@ -812,11 +812,11 @@ def test_reject_invalid_config_and_we_donot_deadlock_the_server(topo, _create_al
         2. Should succeed
     """
     # Changing config area to dc=automembers,dc=com
-    instance = AutoMembershipPlugin(topo.ms["master1"])
+    instance = AutoMembershipPlugin(topo.ms["supplier1"])
     instance.replace("nsslapd-pluginConfigArea", BASE_SUFF)
-    topo.ms["master1"] .restart()
+    topo.ms["supplier1"] .restart()
     # Attempting to add invalid config...
-    automembers = AutoMembershipDefinitions(topo.ms["master1"], BASE_SUFF)
+    automembers = AutoMembershipDefinitions(topo.ms["supplier1"], BASE_SUFF)
     with pytest.raises(ldap.UNWILLING_TO_PERFORM):
         automembers.create(properties={
             'cn': 'userGroups',
@@ -826,7 +826,7 @@ def test_reject_invalid_config_and_we_donot_deadlock_the_server(topo, _create_al
             "autoMemberGroupingAttr": "member: dn"
         })
     # Verify server is still working
-    automembers = AutoMembershipRegexRules(topo.ms["master1"],
+    automembers = AutoMembershipRegexRules(topo.ms["supplier1"],
                                            f'cn=userGroups,cn=Auto Membership Plugin,'
                                            f'cn=plugins,cn=config')
     with pytest.raises(ldap.ALREADY_EXISTS):
@@ -842,10 +842,10 @@ def test_reject_invalid_config_and_we_donot_deadlock_the_server(topo, _create_al
 
     # Adding first user...
     for uid in range(300, 302):
-        UserAccounts(topo.ms["master1"], BASE_SUFF, rdn=None).create_test_user(uid=uid, gid=uid)
+        UserAccounts(topo.ms["supplier1"], BASE_SUFF, rdn=None).create_test_user(uid=uid, gid=uid)
     # Remove the automember plugin configuration area again.
     instance.remove("nsslapd-pluginConfigArea", BASE_SUFF)
-    topo.ms["master1"] .restart()
+    topo.ms["supplier1"] .restart()
 
 
 @pytest.fixture(scope="module")
@@ -858,18 +858,18 @@ def _startuptask(topo):
                     "cn=testuserGroups",
                     "cn=subsuffGroups",
                     "cn=hostGroups"]:
-        AutoMembershipDefinition(topo.ms["master1"], f'{Configs},{PLUGIN_AUTO}').delete()
-    AutoMembershipDefinition(topo.ms["master1"], "cn=userGroups,{}".format(PLUGIN_AUTO)).replace(
+        AutoMembershipDefinition(topo.ms["supplier1"], f'{Configs},{PLUGIN_AUTO}').delete()
+    AutoMembershipDefinition(topo.ms["supplier1"], "cn=userGroups,{}".format(PLUGIN_AUTO)).replace(
         'autoMemberScope', 'ou=TaskEmployees,dc=autoMembers,dc=com')
-    topo.ms['master1'].restart()
+    topo.ms['supplier1'].restart()
 
 
 @pytest.fixture(scope="function")
 def _fixture_for_build_task(request, topo):
     def finof():
-        master = topo.ms['master1']
+        supplier = topo.ms['supplier1']
         auto_mem_scope = "ou=TaskEmployees,{}".format(BASE_SUFF)
-        for user in nsAdminGroups(master, auto_mem_scope, rdn=None).list():
+        for user in nsAdminGroups(supplier, auto_mem_scope, rdn=None).list():
             user.delete()
 
     request.addfinalizer(finof)
@@ -892,32 +892,32 @@ def test_automemtask_re_build_task(topo, _create_all_entries, _startuptask, _fix
         2. Success
         3. Success
     """
-    master = topo.ms['master1']
+    supplier = topo.ms['supplier1']
     testid = "autoMemTask_01"
     auto_mem_scope = "ou=TaskEmployees,{}".format(BASE_SUFF)
     managers_grp = "cn=Managers,ou=userGroups,{}".format(BASE_SUFF)
     contract_grp = "cn=Contractors,ou=userGroups,{}".format(BASE_SUFF)
     user_rdn = "User_{}".format(testid)
     # make sure the retro changelog is disabled
-    RetroChangelogPlugin(master).disable()
-    AutoMembershipPlugin(master).disable()
-    master.restart()
+    RetroChangelogPlugin(supplier).disable()
+    AutoMembershipPlugin(supplier).disable()
+    supplier.restart()
     for i in range(10):
         add_user(topo, "{}{}".format(user_rdn, str(i)), auto_mem_scope, str(1188), str(1189), "Manager")
     for grp in (managers_grp, contract_grp):
         with pytest.raises(AssertionError):
             assert check_groups(topo, grp, f'uid=User_autoMemTask_010,{auto_mem_scope}', 'member')
-    AutoMembershipPlugin(master).enable()
-    master.restart()
+    AutoMembershipPlugin(supplier).enable()
+    supplier.restart()
     error_string = "automember_rebuild_task_thread"
-    AutomemberRebuildMembershipTask(master).create(properties={
+    AutomemberRebuildMembershipTask(supplier).create(properties={
         'basedn': auto_mem_scope,
         'filter': "objectClass=posixAccount"
     })
     # Search for any error logs
-    assert not master.searchErrorsLog(error_string)
+    assert not supplier.searchErrorsLog(error_string)
     for grp in (managers_grp, contract_grp):
-        bulk_check_groups(master, grp, "member", 10)
+        bulk_check_groups(supplier, grp, "member", 10)
 
 
 def ldif_check_groups(USERS_DN, MEMBATTR, TOTAL_MEM, LDIF_FILE):
@@ -954,25 +954,25 @@ def test_automemtask_export_task(topo, _create_all_entries, _startuptask, _fixtu
         1. Success
         2. Success
     """
-    master = topo.ms['master1']
-    p = Paths('master1')
+    supplier = topo.ms['supplier1']
+    p = Paths('supplier1')
     testid = "autoMemTask_02"
     auto_mem_scope = "ou=TaskEmployees,{}".format(BASE_SUFF)
     managers_grp = "cn=Managers,ou=userGroups,{}".format(BASE_SUFF)
     user_rdn = "User_{}".format(testid)
     # Disabling plugin
-    AutoMembershipPlugin(master).disable()
-    master.restart()
+    AutoMembershipPlugin(supplier).disable()
+    supplier.restart()
     for i in range(10):
         add_user(topo, "{}{}".format(user_rdn, str(i)), auto_mem_scope, str(2788), str(2789), "Manager")
     with pytest.raises(AssertionError):
-        bulk_check_groups(master, managers_grp, "member", 10)
-    AutoMembershipPlugin(master).enable()
-    master.restart()
+        bulk_check_groups(supplier, managers_grp, "member", 10)
+    AutoMembershipPlugin(supplier).enable()
+    supplier.restart()
     export_ldif = p.backup_dir + "/Out_Export_02.ldif"
     if os.path.exists(export_ldif):
         os.remove(export_ldif)
-    exp_task = Tasks(master)
+    exp_task = Tasks(supplier)
     exp_task.automemberExport(suffix=auto_mem_scope, fstr='objectclass=posixAccount', ldif_out=export_ldif)
     check_file_exists(export_ldif)
     ldif_check_groups("cn={}".format(user_rdn), "member", 10, export_ldif)
@@ -990,8 +990,8 @@ def test_automemtask_mapping(topo, _create_all_entries, _startuptask, _fixture_f
         1. Should succeed
         2. Should succeed
     """
-    master = topo.ms['master1']
-    p = Paths('master1')
+    supplier = topo.ms['supplier1']
+    p = Paths('supplier1')
     testid = "autoMemTask_02"
     auto_mem_scope = "ou=TaskEmployees,{}".format(BASE_SUFF)
     user_rdn = "User_{}".format(testid)
@@ -1002,9 +1002,9 @@ def test_automemtask_mapping(topo, _create_all_entries, _startuptask, _fixture_f
             os.remove(file)
     for i in range(10):
         add_user(topo, "{}{}".format(user_rdn, str(i)), auto_mem_scope, str(2788), str(2789), "Manager")
-    ExportTask(master).export_suffix_to_ldif(ldiffile=export_ldif, suffix=BASE_SUFF)
+    ExportTask(supplier).export_suffix_to_ldif(ldiffile=export_ldif, suffix=BASE_SUFF)
     check_file_exists(export_ldif)
-    map_task = Tasks(master)
+    map_task = Tasks(supplier)
     map_task.automemberMap(ldif_in=export_ldif, ldif_out=output_ldif3)
     check_file_exists(output_ldif3)
     ldif_check_groups("cn={}".format(user_rdn), "member", 10, output_ldif3)
@@ -1023,27 +1023,27 @@ def test_automemtask_re_build(topo, _create_all_entries, _startuptask, _fixture_
         1. Should succeed
         2. Should not succeed
     """
-    master = topo.ms['master1']
+    supplier = topo.ms['supplier1']
     testid = "autoMemTask_04"
     auto_mem_scope = "ou=TaskEmployees,{}".format(BASE_SUFF)
     managers_grp = "cn=Managers,ou=userGroups,{}".format(BASE_SUFF)
     user_rdn = "User_{}".format(testid)
     # Disabling plugin
-    AutoMembershipPlugin(master).disable()
-    master.restart()
+    AutoMembershipPlugin(supplier).disable()
+    supplier.restart()
     for number in range(10):
         add_user(topo, f'{user_rdn}{number}', auto_mem_scope, str(number), str(number), "Manager")
     with pytest.raises(AssertionError):
-        bulk_check_groups(master, managers_grp, "member", 10)
+        bulk_check_groups(supplier, managers_grp, "member", 10)
     # Enabling plugin
-    AutoMembershipPlugin(master).enable()
-    master.restart()
-    AutomemberRebuildMembershipTask(master).create(properties={
+    AutoMembershipPlugin(supplier).enable()
+    supplier.restart()
+    AutomemberRebuildMembershipTask(supplier).create(properties={
         'basedn': auto_mem_scope,
         'filter': "objectClass=inetOrgPerson"
     })
     with pytest.raises(AssertionError):
-        bulk_check_groups(master, managers_grp, "member", 10)
+        bulk_check_groups(supplier, managers_grp, "member", 10)
 
 
 def test_automemtask_export(topo, _create_all_entries, _startuptask, _fixture_for_build_task):
@@ -1057,26 +1057,26 @@ def test_automemtask_export(topo, _create_all_entries, _startuptask, _fixture_fo
         1. Should succeed
         2. Should not succeed
     """
-    master = topo.ms['master1']
-    p = Paths('master1')
+    supplier = topo.ms['supplier1']
+    p = Paths('supplier1')
     testid = "autoMemTask_05"
     auto_mem_scope = "ou=TaskEmployees,{}".format(BASE_SUFF)
     managers_grp = "cn=Managers,ou=userGroups,{}".format(BASE_SUFF)
     user_rdn = "User_{}".format(testid)
     # Disabling plugin
-    AutoMembershipPlugin(master).disable()
-    master.restart()
+    AutoMembershipPlugin(supplier).disable()
+    supplier.restart()
     for number in range(10):
         add_user(topo, f'{user_rdn}{number}', auto_mem_scope, str(number), str(number), "Manager")
     with pytest.raises(AssertionError):
-        bulk_check_groups(master, managers_grp, "member", 10)
+        bulk_check_groups(supplier, managers_grp, "member", 10)
     # Enabling plugin
-    AutoMembershipPlugin(master).enable()
-    master.restart()
+    AutoMembershipPlugin(supplier).enable()
+    supplier.restart()
     export_ldif = p.backup_dir + "/Out_Export_02.ldif"
     if os.path.exists(export_ldif):
         os.remove(export_ldif)
-    exp_task = Tasks(master)
+    exp_task = Tasks(supplier)
     exp_task.automemberExport(suffix=auto_mem_scope, fstr='objectclass=inetOrgPerson', ldif_out=export_ldif)
     check_file_exists(export_ldif)
     with pytest.raises(AssertionError):
@@ -1097,36 +1097,36 @@ def test_automemtask_run_re_build(topo, _create_all_entries, _startuptask, _fixt
         2. Should succeed
         3. Should succeed
     """
-    master = topo.ms['master1']
-    p = Paths('master1')
+    supplier = topo.ms['supplier1']
+    p = Paths('supplier1')
     testid = "autoMemTask_06"
     auto_mem_scope = "ou=TaskEmployees,{}".format(BASE_SUFF)
     managers_grp = "cn=Managers,ou=userGroups,{}".format(BASE_SUFF)
     user_rdn = "User_{}".format(testid)
     # Disabling plugin
-    AutoMembershipPlugin(master).disable()
-    master.restart()
+    AutoMembershipPlugin(supplier).disable()
+    supplier.restart()
     for number in range(10):
         add_user(topo, f'{user_rdn}{number}', auto_mem_scope, '111', '111', "Manager")
-    for user in nsAdminGroups(master, auto_mem_scope, rdn=None).list():
+    for user in nsAdminGroups(supplier, auto_mem_scope, rdn=None).list():
         user.add('objectclass', 'inetOrgPerson')
-    AutoMembershipDefinition(master,
+    AutoMembershipDefinition(supplier,
                              f'cn=userGroups,{PLUGIN_AUTO}').replace('autoMemberFilter',
                                                                      "objectclass=inetOrgPerson")
-    master.restart()
+    supplier.restart()
     with pytest.raises(AssertionError):
-        bulk_check_groups(master, managers_grp, "member", 10)
-    AutoMembershipPlugin(master).enable()
-    master.restart()
-    AutomemberRebuildMembershipTask(master).create(properties={
+        bulk_check_groups(supplier, managers_grp, "member", 10)
+    AutoMembershipPlugin(supplier).enable()
+    supplier.restart()
+    AutomemberRebuildMembershipTask(supplier).create(properties={
         'basedn': auto_mem_scope,
         'filter': "objectClass=inetOrgPerson"})
     time.sleep(2)
-    bulk_check_groups(master, managers_grp, "member", 10)
-    AutoMembershipDefinition(master,
+    bulk_check_groups(supplier, managers_grp, "member", 10)
+    AutoMembershipDefinition(supplier,
                              f'cn=userGroups,{PLUGIN_AUTO}').replace('autoMemberFilter',
                                                                      "objectclass=posixAccount")
-    master.restart()
+    supplier.restart()
 
 
 def test_automemtask_run_export(topo, _create_all_entries, _startuptask, _fixture_for_build_task):
@@ -1142,35 +1142,35 @@ def test_automemtask_run_export(topo, _create_all_entries, _startuptask, _fixtur
         2. Should succeed
         3. Should succeed
     """
-    master = topo.ms['master1']
-    p = Paths('master1')
+    supplier = topo.ms['supplier1']
+    p = Paths('supplier1')
     testid = "autoMemTask_07"
     auto_mem_scope = "ou=TaskEmployees,{}".format(BASE_SUFF)
     managers_grp = "cn=Managers,ou=userGroups,{}".format(BASE_SUFF)
     user_rdn = "User_{}".format(testid)
     # Disabling plugin
-    AutoMembershipPlugin(master).disable()
-    master.restart()
+    AutoMembershipPlugin(supplier).disable()
+    supplier.restart()
     for number in range(10):
         add_user(topo, f'{user_rdn}{number}', auto_mem_scope, '222', '222', "Manager")
-    for user in nsAdminGroups(master, auto_mem_scope, rdn=None).list():
+    for user in nsAdminGroups(supplier, auto_mem_scope, rdn=None).list():
         user.add('objectclass', 'inetOrgPerson')
-    AutoMembershipDefinition(master, f'cn=userGroups,{PLUGIN_AUTO}').replace('autoMemberFilter',
+    AutoMembershipDefinition(supplier, f'cn=userGroups,{PLUGIN_AUTO}').replace('autoMemberFilter',
                                                                              "objectclass=inetOrgPerson")
-    master.restart()
+    supplier.restart()
     # Enabling plugin
-    AutoMembershipPlugin(master).enable()
-    master.restart()
+    AutoMembershipPlugin(supplier).enable()
+    supplier.restart()
     with pytest.raises(AssertionError):
-        bulk_check_groups(master, managers_grp, "member", 10)
+        bulk_check_groups(supplier, managers_grp, "member", 10)
     export_ldif = p.backup_dir + "/Out_Export_02.ldif"
     if os.path.exists(export_ldif):
         os.remove(export_ldif)
-    exp_task = Tasks(master)
+    exp_task = Tasks(supplier)
     exp_task.automemberExport(suffix=auto_mem_scope, fstr='objectclass=inetOrgPerson', ldif_out=export_ldif)
     check_file_exists(export_ldif)
     ldif_check_groups("cn={}".format(user_rdn), "member", 10, export_ldif)
-    AutoMembershipDefinition(master, f'cn=userGroups,{PLUGIN_AUTO}').\
+    AutoMembershipDefinition(supplier, f'cn=userGroups,{PLUGIN_AUTO}').\
         replace('autoMemberFilter', "objectclass=posixAccount")
 
 

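A condensed sketch of the disable/rebuild pattern the task tests above repeat, with the renamed fixture key (the scope and filter values are illustrative, not the suite's):

    # Sketch only; mirrors the calls used in the tests above.
    supplier = topo.ms['supplier1']
    AutoMembershipPlugin(supplier).disable()
    supplier.restart()
    # ... add entries while the plugin is off, then rebuild membership ...
    AutoMembershipPlugin(supplier).enable()
    supplier.restart()
    AutomemberRebuildMembershipTask(supplier).create(properties={
        'basedn': "ou=TaskEmployees,dc=autoMembers,dc=com",  # illustrative scope
        'filter': "objectClass=posixAccount",
    })
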
+ 1 - 1
dirsrvtests/tests/suites/basic/basic_test.py

@@ -927,7 +927,7 @@ def test_basic_ldapagent(topology_st, import_example_ldif):
     config_file = os.path.join(topology_st.standalone.get_sysconf_dir(), 'dirsrv/config/agent.conf')
 
     agent_config_file = open(config_file, 'w')
-    agent_config_file.write('agentx-master ' + var_dir + '/agentx/master\n')
+    agent_config_file.write('agentx-supplier ' + var_dir + '/agentx/supplier\n')
     agent_config_file.write('agent-logdir ' + var_dir + '/log/dirsrv\n')
     agent_config_file.write('server slapd-' + topology_st.standalone.serverid + '\n')
     agent_config_file.close()

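The agent.conf write above, sketched with a context manager so the handle is closed even if a write raises; config_file and var_dir are the names from the test, and the directive is the renamed one:

    # Sketch, not the suite's code: a with-block replaces the manual close().
    with open(config_file, 'w') as agent_config_file:
        agent_config_file.write('agentx-supplier ' + var_dir + '/agentx/supplier\n')
        agent_config_file.write('agent-logdir ' + var_dir + '/log/dirsrv\n')
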
+ 5 - 5
dirsrvtests/tests/suites/clu/repl_monitor_test.py

@@ -82,7 +82,7 @@ def get_hostnames_from_log(port1, port2):
     host_m1 = 'localhost.localdomain'
     if (match is not None):
         host_m1 = match.group(2)
-    # Same for master 2 
+    # Same for supplier 2 
     regexp = '(Supplier: )([^:]*)(:' + str(port2) + '\D)'
     match=re.search(regexp, logtext)
     host_m2 = 'localhost.localdomain'
@@ -114,11 +114,11 @@ def test_dsconf_replication_monitor(topology_m2, set_log_file):
          6. Success
     """
 
-    m1 = topology_m2.ms["master1"]
-    m2 = topology_m2.ms["master2"]
+    m1 = topology_m2.ms["supplier1"]
+    m2 = topology_m2.ms["supplier2"]
 
     # Enable ldapi if not already done.
-    for inst in [topology_m2.ms["master1"], topology_m2.ms["master2"]]:
+    for inst in [topology_m2.ms["supplier1"], topology_m2.ms["supplier2"]]:
         if not inst.can_autobind():
             # Update ns-slapd instance
             inst.config.set('nsslapd-ldapilisten', 'on')
@@ -256,7 +256,7 @@ def test_dsconf_replication_monitor(topology_m2, set_log_file):
     args.aliases = None
     args.json = False
     # args needed to generate an instance with dsrc_arg_concat
-    args.instance = 'master1'
+    args.instance = 'supplier1'
     args.basedn = None
     args.binddn = None
     args.bindpw = None

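A short sketch of the ldapi enablement loop above, written over both suppliers; the hunk is truncated, so nsslapd-ldapiautobind is an assumption here (it is the companion setting normally toggled alongside nsslapd-ldapilisten):

    # Sketch under stated assumptions; only nsslapd-ldapilisten is visible above.
    for inst in (topology_m2.ms["supplier1"], topology_m2.ms["supplier2"]):
        if not inst.can_autobind():
            inst.config.set('nsslapd-ldapilisten', 'on')
            inst.config.set('nsslapd-ldapiautobind', 'on')  # assumed attribute
            inst.restart()
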
+ 28 - 28
dirsrvtests/tests/suites/config/config_test.py

@@ -65,49 +65,49 @@ def test_maxbersize_repl(topology_m2, big_file):
     """maxbersize is ignored in the replicated operations.
 
     :id: ad57de60-7d56-4323-bbca-5556e5cdb126
-    :setup: MMR with two masters, test user,
+    :setup: MMR with two suppliers, test user,
             1 MiB big value for any attribute
     :steps:
-        1. Set maxbersize attribute to a small value (20KiB) on master2
-        2. Add the big value to master2
-        3. Add the big value to master1
-        4. Check if the big value was successfully replicated to master2
+        1. Set maxbersize attribute to a small value (20KiB) on supplier2
+        2. Add the big value to supplier2
+        3. Add the big value to supplier1
+        4. Check if the big value was successfully replicated to supplier2
     :expectedresults:
         1. maxbersize should be successfully set
-        2. Adding the big value to master2 failed
-        3. Adding the big value to master1 succeed
-        4. The big value is successfully replicated to master2
+        2. Adding the big value to supplier2 fails
+        3. Adding the big value to supplier1 succeeds
+        4. The big value is successfully replicated to supplier2
     """
 
-    users_m1 = UserAccounts(topology_m2.ms["master1"], DEFAULT_SUFFIX)
-    users_m2 = UserAccounts(topology_m2.ms["master2"], DEFAULT_SUFFIX)
+    users_m1 = UserAccounts(topology_m2.ms["supplier1"], DEFAULT_SUFFIX)
+    users_m2 = UserAccounts(topology_m2.ms["supplier2"], DEFAULT_SUFFIX)
 
     user_m1 = users_m1.create(properties=TEST_USER_PROPERTIES)
     time.sleep(2)
     user_m2 = users_m2.get(dn=user_m1.dn)
 
-    log.info("Set nsslapd-maxbersize: 20K to master2")
-    topology_m2.ms["master2"].config.set('nsslapd-maxbersize', '20480')
+    log.info("Set nsslapd-maxbersize: 20K to supplier2")
+    topology_m2.ms["supplier2"].config.set('nsslapd-maxbersize', '20480')
 
-    topology_m2.ms["master2"].restart()
+    topology_m2.ms["supplier2"].restart()
 
-    log.info('Try to add attribute with a big value to master2 - expect to FAIL')
+    log.info('Try to add attribute with a big value to supplier2 - expect to FAIL')
     with pytest.raises(ldap.SERVER_DOWN):
         user_m2.add('jpegphoto', big_file)
 
-    topology_m2.ms["master2"].restart()
-    topology_m2.ms["master1"].restart()
+    topology_m2.ms["supplier2"].restart()
+    topology_m2.ms["supplier1"].restart()
 
-    log.info('Try to add attribute with a big value to master1 - expect to PASS')
+    log.info('Try to add attribute with a big value to supplier1 - expect to PASS')
     user_m1.add('jpegphoto', big_file)
 
     time.sleep(2)
 
-    log.info('Check if a big value was successfully added to master1')
+    log.info('Check if a big value was successfully added to supplier1')
 
     photo_m1 = user_m1.get_attr_vals('jpegphoto')
 
-    log.info('Check if a big value was successfully replicated to master2')
+    log.info('Check if a big value was successfully replicated to supplier2')
     photo_m2 = user_m2.get_attr_vals('jpegphoto')
 
     assert photo_m2 == photo_m1
@@ -116,7 +116,7 @@ def test_config_listen_backport_size(topology_m2):
     """Check that nsslapd-listen-backlog-size acted as expected
 
     :id: a4385d58-a6ab-491e-a604-6df0e8ed91cd
-    :setup: MMR with two masters
+    :setup: MMR with two suppliers
     :steps:
         1. Search for nsslapd-listen-backlog-size
         2. Set nsslapd-listen-backlog-size to a positive value
@@ -131,23 +131,23 @@ def test_config_listen_backport_size(topology_m2):
         5. nsslapd-listen-backlog-size should be successfully set
     """
 
-    default_val = topology_m2.ms["master1"].config.get_attr_val_bytes('nsslapd-listen-backlog-size')
+    default_val = topology_m2.ms["supplier1"].config.get_attr_val_bytes('nsslapd-listen-backlog-size')
 
-    topology_m2.ms["master1"].config.replace('nsslapd-listen-backlog-size', '256')
+    topology_m2.ms["supplier1"].config.replace('nsslapd-listen-backlog-size', '256')
 
-    topology_m2.ms["master1"].config.replace('nsslapd-listen-backlog-size', '-1')
+    topology_m2.ms["supplier1"].config.replace('nsslapd-listen-backlog-size', '-1')
 
     with pytest.raises(ldap.LDAPError):
-        topology_m2.ms["master1"].config.replace('nsslapd-listen-backlog-size', 'ZZ')
+        topology_m2.ms["supplier1"].config.replace('nsslapd-listen-backlog-size', 'ZZ')
 
-    topology_m2.ms["master1"].config.replace('nsslapd-listen-backlog-size', default_val)
+    topology_m2.ms["supplier1"].config.replace('nsslapd-listen-backlog-size', default_val)
 
 
 def test_config_deadlock_policy(topology_m2):
     """Check that nsslapd-db-deadlock-policy acted as expected
 
     :id: a24e25fd-bc15-47fa-b018-372f6a2ec59c
-    :setup: MMR with two masters
+    :setup: MMR with two suppliers
     :steps:
         1. Search for nsslapd-db-deadlock-policy and check if
            it contains a default value
@@ -165,8 +165,8 @@ def test_config_deadlock_policy(topology_m2):
 
     default_val = b'9'
 
-    ldbmconfig = LDBMConfig(topology_m2.ms["master1"])
-    bdbconfig = BDB_LDBMConfig(topology_m2.ms["master1"])
+    ldbmconfig = LDBMConfig(topology_m2.ms["supplier1"])
+    bdbconfig = BDB_LDBMConfig(topology_m2.ms["supplier1"])
 
     if ds_is_older('1.4.2'):
         deadlock_policy = ldbmconfig.get_attr_val_bytes('nsslapd-db-deadlock-policy')

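The maxbersize hunk above relies on one server behavior: nsslapd-maxbersize caps incoming client BER packets but is ignored for replicated operations, so the oversized add fails locally on supplier2 yet still reaches it through replication from supplier1. A minimal sketch of the client-side half, assuming the same lib389 objects as in the hunk:

    import ldap
    import pytest

    # 20 KiB cap; the server drops any connection that sends a larger
    # BER packet, which python-ldap reports as SERVER_DOWN.
    topology_m2.ms["supplier2"].config.set('nsslapd-maxbersize', '20480')
    topology_m2.ms["supplier2"].restart()
    with pytest.raises(ldap.SERVER_DOWN):
        user_m2.add('jpegphoto', big_file)
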
+ 1 - 1
dirsrvtests/tests/suites/config/regression_test.py

@@ -79,7 +79,7 @@ def test_maxbersize_repl(topo):
     nsslapd-errorlog-logmaxdiskspace are set in certain order
 
     :id: 743e912c-2be4-4f5f-9c2a-93dcb18f51a0
-    :setup: MMR with two masters
+    :setup: MMR with two suppliers
     :steps:
         1. Stop the instance
         2. Set nsslapd-errorlog-maxlogsize before/after

+ 52 - 52
dirsrvtests/tests/suites/ds_tools/replcheck_test.py

@@ -45,12 +45,12 @@ def _delete_container(cont):
 
 @pytest.fixture(scope="module")
 def topo_tls_ldapi(topo):
-    """Enable TLS on both masters and reconfigure both agreements
+    """Enable TLS on both suppliers and reconfigure both agreements
     to use TLS Client auth. Also, setup ldapi and export DB
     """
 
-    m1 = topo.ms["master1"]
-    m2 = topo.ms["master2"]
+    m1 = topo.ms["supplier1"]
+    m2 = topo.ms["supplier2"]
     # Create the certmap before we restart for enable_tls
     cm_m1 = CertmapLegacy(m1)
     cm_m2 = CertmapLegacy(m2)
@@ -117,8 +117,8 @@ def replcheck_cmd_list(topo_tls_ldapi):
     and compare exported ldif files
     """
 
-    m1 = topo_tls_ldapi.ms["master1"]
-    m2 = topo_tls_ldapi.ms["master2"]
+    m1 = topo_tls_ldapi.ms["supplier1"]
+    m2 = topo_tls_ldapi.ms["supplier2"]
 
     for inst in topo_tls_ldapi:
         inst.stop()
@@ -169,7 +169,7 @@ def test_state(topo_tls_ldapi):
 
     :id: 1cc6b28b-8a42-45fb-ab50-9552db0ac178
     :customerscenario: True
-    :setup: Two master replication
+    :setup: Two supplier replication
     :steps:
         1. Get the replication state value
         2. The state value is as expected
@@ -177,15 +177,15 @@ def test_state(topo_tls_ldapi):
         1. It should be successful
         2. It should be successful
     """
-    m1 = topo_tls_ldapi.ms["master1"]
-    m2 = topo_tls_ldapi.ms["master2"]
+    m1 = topo_tls_ldapi.ms["supplier1"]
+    m2 = topo_tls_ldapi.ms["supplier2"]
     ds_replcheck_path = os.path.join(m1.ds_paths.bin_dir, 'ds-replcheck')
 
     tool_cmd = [ds_replcheck_path, 'state', '-b', DEFAULT_SUFFIX, '-D', DN_DM, '-w', PW_DM,
                 '-m', 'ldaps://{}:{}'.format(m1.host, m1.sslport),
                 '-r', 'ldaps://{}:{}'.format(m2.host, m2.sslport)]
     result = subprocess.check_output(tool_cmd, encoding='utf-8')
-    assert (result.rstrip() == "Replication State: Master and Replica are in perfect synchronization")
+    assert (result.rstrip() == "Replication State: Supplier and Replica are in perfect synchronization")
 
 
 def test_check_ruv(topo_tls_ldapi):
@@ -193,9 +193,9 @@ def test_check_ruv(topo_tls_ldapi):
 
     :id: 1cc6b28b-8a42-45fb-ab50-9552db0ac179
     :customerscenario: True
-    :setup: Two master replication
+    :setup: Two supplier replication
     :steps:
-        1. Get RUV from master and replica
+        1. Get RUV from supplier and replica
         2. Generate the report
         3. Check that the RUV is mentioned in the report
     :expectedresults:
@@ -204,7 +204,7 @@ def test_check_ruv(topo_tls_ldapi):
         3. The RUV should be mentioned in the report
     """
 
-    m1 = topo_tls_ldapi.ms["master1"]
+    m1 = topo_tls_ldapi.ms["supplier1"]
 
     replicas_m1 = Replica(m1, DEFAULT_SUFFIX)
     ruv_entries = replicas_m1.get_attr_vals_utf8('nsds50ruv')
@@ -219,10 +219,10 @@ def test_missing_entries(topo_tls_ldapi):
 
     :id: f91b6798-6e6e-420a-ad2f-3222bb908b7d
     :customerscenario: True
-    :setup: Two master replication
+    :setup: Two supplier replication
     :steps:
-        1. Pause replication between master and replica
-        2. Add two entries to master and two entries to replica
+        1. Pause replication between supplier and replica
+        2. Add two entries to supplier and two entries to replica
         3. Generate the report
         4. Check that the entries DN are mentioned in the report
     :expectedresults:
@@ -232,8 +232,8 @@ def test_missing_entries(topo_tls_ldapi):
         4. The entries DN should be mentioned in the report
     """
 
-    m1 = topo_tls_ldapi.ms["master1"]
-    m2 = topo_tls_ldapi.ms["master2"]
+    m1 = topo_tls_ldapi.ms["supplier1"]
+    m2 = topo_tls_ldapi.ms["supplier2"]
 
     try:
         topo_tls_ldapi.pause_all_replicas()
@@ -261,11 +261,11 @@ def test_tombstones(topo_tls_ldapi):
 
     :id: bd27de78-0046-431c-8240-a93052df1cdc
     :customerscenario: True
-    :setup: Two master replication
+    :setup: Two supplier replication
     :steps:
-        1. Add an entry to master and wait for replication
-        2. Pause replication between master and replica
-        3. Delete the entry from master
+        1. Add an entry to supplier and wait for replication
+        2. Pause replication between supplier and replica
+        3. Delete the entry from supplier
         4. Generate the report
         5. Check that we have different number of tombstones in the report
     :expectedresults:
@@ -276,7 +276,7 @@ def test_tombstones(topo_tls_ldapi):
         5. It should be successful
     """
 
-    m1 = topo_tls_ldapi.ms["master1"]
+    m1 = topo_tls_ldapi.ms["supplier1"]
 
     try:
         users_m1 = UserAccounts(m1, DEFAULT_SUFFIX)
@@ -298,13 +298,13 @@ def test_conflict_entries(topo_tls_ldapi):
 
     :id: 4eda0c5d-0824-4cfd-896e-845faf49ddaf
     :customerscenario: True
-    :setup: Two master replication
+    :setup: Two supplier replication
     :steps:
-        1. Pause replication between master and replica
-        2. Add two entries to master and two entries to replica
-        3. Delete first entry from master
+        1. Pause replication between supplier and replica
+        2. Add two entries to supplier and two entries to replica
+        3. Delete first entry from supplier
         4. Add a child to the first entry
-        5. Resume replication between master and replica
+        5. Resume replication between supplier and replica
         6. Generate the report
         7. Check that the entries DN are mentioned in the report
     :expectedresults:
@@ -317,8 +317,8 @@ def test_conflict_entries(topo_tls_ldapi):
         7. The entries DN should be mentioned in the report
     """
 
-    m1 = topo_tls_ldapi.ms["master1"]
-    m2 = topo_tls_ldapi.ms["master2"]
+    m1 = topo_tls_ldapi.ms["supplier1"]
+    m2 = topo_tls_ldapi.ms["supplier2"]
 
     topo_tls_ldapi.pause_all_replicas()
 
@@ -342,12 +342,12 @@ def test_inconsistencies(topo_tls_ldapi):
 
     :id: c8fe3e84-b346-4969-8f5d-3462b643a1d2
     :customerscenario: True
-    :setup: Two master replication
+    :setup: Two supplier replication
     :steps:
-        1. Add an entry to master and wait for replication
-        2. Pause replication between master and replica
-        3. Set different description attr values to master and replica
-        4. Add telephoneNumber attribute to master and not to replica
+        1. Add an entry to supplier and wait for replication
+        2. Pause replication between supplier and replica
+        3. Set different description attr values to supplier and replica
+        4. Add telephoneNumber attribute to supplier and not to replica
         5. Generate the report
         6. Check that attribute values are mentioned in the report
         7. Generate the report with -i option to ignore some attributes
@@ -363,8 +363,8 @@ def test_inconsistencies(topo_tls_ldapi):
         8. The attribute values should not be mentioned in the report
     """
 
-    m1 = topo_tls_ldapi.ms["master1"]
-    m2 = topo_tls_ldapi.ms["master2"]
+    m1 = topo_tls_ldapi.ms["supplier1"]
+    m2 = topo_tls_ldapi.ms["supplier2"]
     attr_m1 = "m1_inconsistency"
     attr_m2 = "m2_inconsistency"
     attr_first = "first ordered valued"
@@ -415,14 +415,14 @@ def test_suffix_exists(topo_tls_ldapi):
 
     :id: ce75debc-c07f-4e72-8787-8f99cbfaf1e2
     :customerscenario: True
-    :setup: Two master replication
+    :setup: Two supplier replication
     :steps:
         1. Run ds-replcheck with wrong suffix (Non Existing)
     :expectedresults:
         1. It should be unsuccessful
     """
-    m1 = topo_tls_ldapi.ms["master1"]
-    m2 = topo_tls_ldapi.ms["master2"]
+    m1 = topo_tls_ldapi.ms["supplier1"]
+    m2 = topo_tls_ldapi.ms["supplier2"]
     ds_replcheck_path = os.path.join(m1.ds_paths.bin_dir, 'ds-replcheck')
 
     if ds_is_newer("1.4.1.2"):
@@ -444,10 +444,10 @@ def test_check_missing_tombstones(topo_tls_ldapi):
 
     :id: 93067a5a-416e-4243-9418-c4dfcf42e093
     :customerscenario: True
-    :setup: Two master replication
+    :setup: Two supplier replication
     :steps:
-        1. Pause replication between master and replica
-        2. Add and delete an entry on the master
+        1. Pause replication between supplier and replica
+        2. Add and delete an entry on the supplier
         3. Run ds-replcheck
         4. Verify there are NO complaints about missing entries/tombstones
     :expectedresults:
@@ -456,8 +456,8 @@ def test_check_missing_tombstones(topo_tls_ldapi):
         3. It should be successful
         4. It should be successful
     """
-    m1 = topo_tls_ldapi.ms["master1"]
-    m2 = topo_tls_ldapi.ms["master2"]
+    m1 = topo_tls_ldapi.ms["supplier1"]
+    m2 = topo_tls_ldapi.ms["supplier2"]
 
     try:
         topo_tls_ldapi.pause_all_replicas()
@@ -478,7 +478,7 @@ def test_dsreplcheck_with_password_file(topo_tls_ldapi, tmpdir):
 
     :id: 0d847ec7-6eaf-4cb5-a9c6-e4a5a1778f93
     :customerscenario: True
-    :setup: Two master replication
+    :setup: Two supplier replication
     :steps:
         1. Create a password file with the default password of the server.
         2. Run ds-replcheck with -y option (used to pass password file)
@@ -486,8 +486,8 @@ def test_dsreplcheck_with_password_file(topo_tls_ldapi, tmpdir):
         1. It should be successful
         2. It should be successful
     """
-    m1 = topo_tls_ldapi.ms["master1"]
-    m2 = topo_tls_ldapi.ms["master2"]
+    m1 = topo_tls_ldapi.ms["supplier1"]
+    m2 = topo_tls_ldapi.ms["supplier2"]
 
     ds_replcheck_path = os.path.join(m1.ds_paths.bin_dir, 'ds-replcheck')
     f = tmpdir.mkdir("my_dir").join("password_file.txt")
@@ -513,19 +513,19 @@ def test_dsreplcheck_timeout_connection_mechanisms(topo_tls_ldapi):
 
     :id: aeeb99c9-09e2-45dc-bd75-9f95409babe7
     :customerscenario: True
-    :setup: Two master replication
+    :setup: Two supplier replication
     :steps:
-        1. Create two masters with various connection mechanisms configured
+        1. Create two suppliers with various connection mechanisms configured
         2. Run ds-replcheck with -t option
     :expectedresults:
         1. Success
         2. Success
     """
 
-    OUTPUT = 'Master and Replica are in perfect synchronization'
+    OUTPUT = 'Supplier and Replica are in perfect synchronization'
 
-    m1 = topo_tls_ldapi.ms["master1"]
-    m2 = topo_tls_ldapi.ms["master2"]
+    m1 = topo_tls_ldapi.ms["supplier1"]
+    m2 = topo_tls_ldapi.ms["supplier2"]
 
     ds_replcheck_path = os.path.join(m1.ds_paths.bin_dir, 'ds-replcheck')
 

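Every case in this file wraps the same ds-replcheck invocation pattern shown in test_state: point -m at one supplier, -r at the other, and parse stdout. A standalone sketch of that call (the binary path, hosts, ports, and bind password are placeholders):

    import subprocess

    cmd = ['/usr/sbin/ds-replcheck', 'state',
           '-b', 'dc=example,dc=com',
           '-D', 'cn=Directory Manager', '-w', 'password',
           '-m', 'ldaps://supplier1.example.com:636',
           '-r', 'ldaps://supplier2.example.com:636']
    out = subprocess.check_output(cmd, encoding='utf-8')
    # A healthy topology prints the line asserted in test_state above.
    assert 'Supplier and Replica are in perfect synchronization' in out
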
+ 12 - 12
dirsrvtests/tests/suites/dynamic_plugins/dynamic_plugins_test.py

@@ -31,8 +31,8 @@ log = logging.getLogger(__name__)
 def check_replicas(topology_m2):
     """Check that replication is in sync and working"""
 
-    m1 = topology_m2.ms["master1"]
-    m2 = topology_m2.ms["master2"]
+    m1 = topology_m2.ms["supplier1"]
+    m2 = topology_m2.ms["supplier2"]
 
     log.info('Checking if replication is in sync...')
     repl = ReplicationManager(DEFAULT_SUFFIX)
@@ -42,16 +42,16 @@ def check_replicas(topology_m2):
     #
     log.info('Checking if the data is the same between the replicas...')
 
-    # Check the master
+    # Check the supplier
     try:
         entries = m1.search_s(DEFAULT_SUFFIX,
                               ldap.SCOPE_SUBTREE,
                               "(|(uid=person*)(uid=entry*)(uid=employee*))")
         if len(entries) > 0:
-            log.error('Master database has incorrect data set!\n')
+            log.error('Supplier database has incorrect data set!\n')
             assert False
     except ldap.LDAPError as e:
-        log.fatal('Unable to search db on master: ' + e.message['desc'])
+        log.fatal('Unable to search db on supplier: ' + e.message['desc'])
         assert False
 
     # Check the consumer
@@ -60,7 +60,7 @@ def check_replicas(topology_m2):
                               ldap.SCOPE_SUBTREE,
                               "(|(uid=person*)(uid=entry*)(uid=employee*))")
         if len(entries) > 0:
-            log.error('Consumer database in not consistent with master database')
+            log.error('Consumer database is not consistent with supplier database')
             assert False
     except ldap.LDAPError as e:
         log.fatal('Unable to search db on consumer: ' + e.message['desc'])
@@ -74,7 +74,7 @@ def test_acceptance(topology_m2):
     changing the configuration without restarting the server.
 
     :id: 96136538-0151-4b09-9933-0e0cbf2c786c
-    :setup: 2 Master Instances
+    :setup: 2 Supplier Instances
     :steps:
         1. Pause all replication
         2. Set nsslapd-dynamic-plugins to on
@@ -93,7 +93,7 @@ def test_acceptance(topology_m2):
         7. Success
     """
 
-    m1 = topology_m2.ms["master1"]
+    m1 = topology_m2.ms["supplier1"]
     msg = ' (no replication)'
     replication_run = False
 
@@ -146,7 +146,7 @@ def test_memory_corruption(topology_m2):
     dynamic plugins option is enabled
 
     :id: 96136538-0151-4b09-9933-0e0cbf2c7862
-    :setup: 2 Master Instances
+    :setup: 2 Supplier Instances
     :steps:
         1. Pause all replication
         2. Set nsslapd-dynamic-plugins to on
@@ -171,7 +171,7 @@ def test_memory_corruption(topology_m2):
     """
 
 
-    m1 = topology_m2.ms["master1"]
+    m1 = topology_m2.ms["supplier1"]
     msg = ' (no replication)'
     replication_run = False
 
@@ -247,7 +247,7 @@ def test_stress(topology_m2):
     """Test plugins while under a big load. Perform the test 5 times
 
     :id: 96136538-0151-4b09-9933-0e0cbf2c7863
-    :setup: 2 Master Instances
+    :setup: 2 Supplier Instances
     :steps:
         1. Pause all replication
         2. Set nsslapd-dynamic-plugins to on
@@ -288,7 +288,7 @@ def test_stress(topology_m2):
         17. Success
     """
 
-    m1 = topology_m2.ms["master1"]
+    m1 = topology_m2.ms["supplier1"]
     msg = ' (no replication)'
     replication_run = False
     stress_max_runs = 5

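check_replicas and the stress tests lean on lib389's ReplicationManager for the sync checks; a minimal sketch of that API as it is used throughout these suites:

    from lib389.replica import ReplicationManager
    from lib389._constants import DEFAULT_SUFFIX

    repl = ReplicationManager(DEFAULT_SUFFIX)
    # Block until supplier2 has caught up with supplier1, then prove a
    # fresh change makes the round trip within 30 seconds.
    repl.wait_for_replication(topology_m2.ms["supplier1"],
                              topology_m2.ms["supplier2"])
    repl.test_replication(topology_m2.ms["supplier1"],
                          topology_m2.ms["supplier2"], 30)
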
+ 2 - 2
dirsrvtests/tests/suites/entryuuid/replicated_test.py

@@ -40,8 +40,8 @@ def test_entryuuid_with_replication(topo_m2):
         1. Success
     """
 
-    server_a = topo_m2.ms["master1"]
-    server_b = topo_m2.ms["master2"]
+    server_a = topo_m2.ms["supplier1"]
+    server_b = topo_m2.ms["supplier2"]
     server_a.config.loglevel(vals=(ErrorLog.DEFAULT,ErrorLog.TRACE))
     server_b.config.loglevel(vals=(ErrorLog.DEFAULT,ErrorLog.TRACE))
 

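This hunk is typical of the whole rename: the topology fixtures expose instances through a plain dict keyed by role name, so only the key strings change from master* to supplier*. A minimal sketch of the access pattern:

    # Same DirSrv objects as before the rename; only the keys moved.
    server_a = topo_m2.ms["supplier1"]
    server_b = topo_m2.ms["supplier2"]
    assert sorted(topo_m2.ms.keys()) == ["supplier1", "supplier2"]
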
+ 115 - 115
dirsrvtests/tests/suites/fourwaymmr/fourwaymmr_test.py

@@ -20,7 +20,7 @@ pytestmark = pytest.mark.tier2
 
 @pytest.fixture(scope="function")
 def _cleanupentris(request, topo_m4):
-    users = UserAccounts(topo_m4.ms["master1"], DEFAULT_SUFFIX)
+    users = UserAccounts(topo_m4.ms["supplier1"], DEFAULT_SUFFIX)
     for i in range(10): users.create_test_user(uid=i)
 
     def fin():
@@ -32,30 +32,30 @@ def _cleanupentris(request, topo_m4):
 
 
 def test_verify_trees(topo_m4):
-    """All 4 masters should have consistent data
+    """All 4 suppliers should have consistent data
 
     :id: 01733ef8-e764-11e8-98f3-8c16451d917b
     :setup: 4 Instances with replication
     :steps:
-        1. All 4 masters should have consistent data now
+        1. All 4 suppliers should have consistent data now
     :expected results:
         1. Should succeed
     """
-    # all 4 masters should have consistent data now
+    # all 4 suppliers should have consistent data now
     repl = ReplicationManager(DEFAULT_SUFFIX)
     repl.test_replication(
-        topo_m4.ms["master1"], topo_m4.ms["master2"], 30
+        topo_m4.ms["supplier1"], topo_m4.ms["supplier2"], 30
     )
     repl.test_replication(
-        topo_m4.ms["master1"], topo_m4.ms["master3"], 30
+        topo_m4.ms["supplier1"], topo_m4.ms["supplier3"], 30
     )
     repl.test_replication(
-        topo_m4.ms["master1"], topo_m4.ms["master4"], 30
+        topo_m4.ms["supplier1"], topo_m4.ms["supplier4"], 30
     )
 
 
-def test_sync_through_to_all_4_masters(topo_m4, _cleanupentris):
-    """Insert fresh data into Master 2 - about 10 entries
+def test_sync_through_to_all_4_suppliers(topo_m4, _cleanupentris):
+    """Insert fresh data into Supplier 2 - about 10 entries
 
     :id: 10917e04-e764-11e8-8367-8c16451d917b
     :setup: 4 Instances with replication
@@ -67,54 +67,54 @@ def test_sync_through_to_all_4_masters(topo_m4, _cleanupentris):
         2. Should succeed
     """
     # Insert fresh data into M2 - about 10 entries
-    # Wait for a minute for data to sync through to all 4 masters
+    # Wait for a minute for data to sync through to all 4 suppliers
     # Begin verification process
     repl = ReplicationManager(DEFAULT_SUFFIX)
     repl.test_replication(
-        topo_m4.ms["master1"], topo_m4.ms["master2"], 30
+        topo_m4.ms["supplier1"], topo_m4.ms["supplier2"], 30
     )
     repl.test_replication(
-        topo_m4.ms["master1"], topo_m4.ms["master3"], 30
+        topo_m4.ms["supplier1"], topo_m4.ms["supplier3"], 30
     )
     repl.test_replication(
-        topo_m4.ms["master1"], topo_m4.ms["master4"], 30
+        topo_m4.ms["supplier1"], topo_m4.ms["supplier4"], 30
     )
 
 
 def test_modify_some_data_in_m3(topo_m4):
-    """Modify some data in Master 3 , check trees on all 4 masters
+    """Modify some data in Supplier 3 , check trees on all 4 suppliers
 
     :id: 33583ff4-e764-11e8-8491-8c16451d917b
     :setup: 4 Instances with replication
     :steps:
-        1. Modify some data in M3 , wait for 20 seconds ,check trees on all 4 masters
+        1. Modify some data in M3, wait for 20 seconds, check trees on all 4 suppliers
     :expected results:
         1. Should succeed
     """
     # modify some data in M3
     # wait for 20 seconds
-    # check trees on all 4 masters
-    users = UserAccounts(topo_m4.ms["master3"], DEFAULT_SUFFIX)
+    # check trees on all 4 suppliers
+    users = UserAccounts(topo_m4.ms["supplier3"], DEFAULT_SUFFIX)
     repl = ReplicationManager(DEFAULT_SUFFIX)
     for i in range(15, 20):
         users.create_test_user(uid=i)
         time.sleep(1)
     for i in range(15, 20):users.list()[19-i].set("description", "description for user{} CHANGED".format(i))
     repl.test_replication(
-        topo_m4.ms["master3"], topo_m4.ms["master1"], 30
+        topo_m4.ms["supplier3"], topo_m4.ms["supplier1"], 30
     )
     repl.test_replication(
-        topo_m4.ms["master3"], topo_m4.ms["master2"], 30
+        topo_m4.ms["supplier3"], topo_m4.ms["supplier2"], 30
     )
     repl.test_replication(
-        topo_m4.ms["master3"], topo_m4.ms["master4"], 30
+        topo_m4.ms["supplier3"], topo_m4.ms["supplier4"], 30
     )
     for i in users.list():
         i.delete()
 
 
 def test_delete_a_few_entries_in_m4(topo_m4, _cleanupentris):
-    """Delete a few entries in Master 4 , verify trees.
+    """Delete a few entries in Supplier 4 , verify trees.
 
     :id: 6ea94d78-e764-11e8-987f-8c16451d917b
     :setup: 4 Instances with replication
@@ -130,19 +130,19 @@ def test_delete_a_few_entries_in_m4(topo_m4, _cleanupentris):
     # delete a few entries in M4
     # wait for 60 seconds for them to propagate
     # verify trees
-    users = UserAccounts(topo_m4.ms["master1"], DEFAULT_SUFFIX)
+    users = UserAccounts(topo_m4.ms["supplier1"], DEFAULT_SUFFIX)
     repl = ReplicationManager(DEFAULT_SUFFIX)
-    repl.wait_for_replication(topo_m4.ms["master4"], topo_m4.ms["master1"])
+    repl.wait_for_replication(topo_m4.ms["supplier4"], topo_m4.ms["supplier1"])
     for i in users.list():
         i.delete()
     repl.test_replication(
-        topo_m4.ms["master4"], topo_m4.ms["master1"], 30
+        topo_m4.ms["supplier4"], topo_m4.ms["supplier1"], 30
     )
     repl.test_replication(
-        topo_m4.ms["master4"], topo_m4.ms["master2"], 30
+        topo_m4.ms["supplier4"], topo_m4.ms["supplier2"], 30
     )
     repl.test_replication(
-        topo_m4.ms["master4"], topo_m4.ms["master3"], 30
+        topo_m4.ms["supplier4"], topo_m4.ms["supplier3"], 30
     )
 
 
@@ -158,7 +158,7 @@ def test_replicated_multivalued_entries(topo_m4):
     """
     # This test case checks that replicated multivalued entries are
     # ordered the same way on all consumers
-    users = UserAccounts(topo_m4.ms["master1"], DEFAULT_SUFFIX)
+    users = UserAccounts(topo_m4.ms["supplier1"], DEFAULT_SUFFIX)
     repl = ReplicationManager(DEFAULT_SUFFIX)
     user_properties = {
         "uid": "test_replicated_multivalued_entries",
@@ -174,13 +174,13 @@ def test_replicated_multivalued_entries(topo_m4):
     testuser.set("mail", ["test1", "test2", "test3"])
     # Now we check the entry on each consumer, making sure the order of the
     # multi-valued mail attribute is the same on all server instances
-    repl.wait_for_replication(topo_m4.ms["master4"], topo_m4.ms["master1"])
-    assert topo_m4.ms["master1"].search_s("uid=test_replicated_multivalued_entries,ou=People,dc=example,dc=com",
+    repl.wait_for_replication(topo_m4.ms["supplier4"], topo_m4.ms["supplier1"])
+    assert topo_m4.ms["supplier1"].search_s("uid=test_replicated_multivalued_entries,ou=People,dc=example,dc=com",
                                           ldap.SCOPE_SUBTREE, '(objectclass=*)', ['mail']) == topo_m4.ms[
-               "master2"].search_s("uid=test_replicated_multivalued_entries,ou=People,dc=example,dc=com",
-                                   ldap.SCOPE_SUBTREE, '(objectclass=*)', ['mail']) == topo_m4.ms["master3"].search_s(
+               "supplier2"].search_s("uid=test_replicated_multivalued_entries,ou=People,dc=example,dc=com",
+                                   ldap.SCOPE_SUBTREE, '(objectclass=*)', ['mail']) == topo_m4.ms["supplier3"].search_s(
         "uid=test_replicated_multivalued_entries,ou=People,dc=example,dc=com", ldap.SCOPE_SUBTREE, '(objectclass=*)',
-        ['mail']) == topo_m4.ms["master4"].search_s(
+        ['mail']) == topo_m4.ms["supplier4"].search_s(
         "uid=test_replicated_multivalued_entries,ou=People,dc=example,dc=com", ldap.SCOPE_SUBTREE, '(objectclass=*)',
         ['mail'])
 
@@ -204,22 +204,22 @@ def test_bad_replication_agreement(topo_m4):
     for inst in topo_m4: inst.stop()
     for i in range(1, 5):
         if os.path.exists(
-            topo_m4.ms["master{}".format(i)].confdir
+            topo_m4.ms["supplier{}".format(i)].confdir
             + "/dse_test_bug157377.ldif"
         ):
             os.remove(
-                topo_m4.ms["master{}".format(i)].confdir
+                topo_m4.ms["supplier{}".format(i)].confdir
                 + "/dse_test_bug157377.ldif"
             )
         shutil.copy(
-            topo_m4.ms["master{}".format(i)].confdir + "/dse.ldif",
-            topo_m4.ms["master{}".format(i)].confdir
+            topo_m4.ms["supplier{}".format(i)].confdir + "/dse.ldif",
+            topo_m4.ms["supplier{}".format(i)].confdir
             + "/dse_test_bug157377.ldif",
         )
         with suppress(PermissionError):
-            os.chown('{}/dse_test_bug157377.ldif'.format(topo_m4.all_insts.get('master{}'.format(i)).confdir),
+            os.chown('{}/dse_test_bug157377.ldif'.format(topo_m4.all_insts.get('supplier{}'.format(i)).confdir),
                  pwd.getpwnam('dirsrv').pw_uid, grp.getgrnam('dirsrv').gr_gid)
-    for i in ["master1", "master2", "master3", "master4"]:
+    for i in ["supplier1", "supplier2", "supplier3", "supplier4"]:
         topo_m4.all_insts.get(i).start()
     # Create the bad replication agreement and try to add it
     # It's a bad agreement: the missing replica host and port information makes it invalid.
@@ -233,7 +233,7 @@ def test_bad_replication_agreement(topo_m4):
         "description": "Ze_bad_agreement",
         "nsds5replicacredentials": "Secret123",
     }
-    for i in ["master1", "master2", "master3", "master4"]:
+    for i in ["supplier1", "supplier2", "supplier3", "supplier4"]:
         with pytest.raises(ldap.UNWILLING_TO_PERFORM):
             Agreement(topo_m4.all_insts.get("{}".format(i))).create(
                 properties=properties
@@ -242,12 +242,12 @@ def test_bad_replication_agreement(topo_m4):
     # Now restore the original dse.ldif
     for i in range(1, 5):
         shutil.copy(
-            topo_m4.ms["master{}".format(i)].confdir
+            topo_m4.ms["supplier{}".format(i)].confdir
             + "/dse_test_bug157377.ldif",
-            topo_m4.ms["master{}".format(i)].confdir + "/dse.ldif",
+            topo_m4.ms["supplier{}".format(i)].confdir + "/dse.ldif",
         )
         with suppress(PermissionError):
-            os.chown('{}/dse_test_bug157377.ldif'.format(topo_m4.all_insts.get('master{}'.format(i)).confdir),
+            os.chown('{}/dse_test_bug157377.ldif'.format(topo_m4.all_insts.get('supplier{}'.format(i)).confdir),
                  pwd.getpwnam('dirsrv').pw_uid, grp.getgrnam('dirsrv').gr_gid)
     for inst in topo_m4: inst.start()
 
@@ -272,19 +272,19 @@ def test_nsds5replicaenabled_verify(topo_m4):
     # Add the attribute nsds5ReplicaEnabled to cn=config
     # Stop M3 and M4 instances, as not required for this test
     repl = ReplicationManager(DEFAULT_SUFFIX)
-    for i in ["master3", "master4"]:
+    for i in ["supplier3", "supplier4"]:
         topo_m4.all_insts.get(i).stop()
     # Adding nsds5ReplicaEnabled to M1
-    topo_m4.ms["master1"].modify_s(
-        topo_m4.ms["master1"].agreement.list(suffix=DEFAULT_SUFFIX)[0].dn,
+    topo_m4.ms["supplier1"].modify_s(
+        topo_m4.ms["supplier1"].agreement.list(suffix=DEFAULT_SUFFIX)[0].dn,
         [(ldap.MOD_ADD, "nsds5ReplicaEnabled", b"on")],
     )
-    topo_m4.ms["master1"].modify_s(
-        topo_m4.ms["master1"].agreement.list(suffix=DEFAULT_SUFFIX)[0].dn,
+    topo_m4.ms["supplier1"].modify_s(
+        topo_m4.ms["supplier1"].agreement.list(suffix=DEFAULT_SUFFIX)[0].dn,
         [(ldap.MOD_REPLACE, "nsds5ReplicaEnabled", b"off")],
     )
-    # Adding data to Master1
-    users = UserAccounts(topo_m4.ms["master1"], DEFAULT_SUFFIX)
+    # Adding data to Supplier1
+    users = UserAccounts(topo_m4.ms["supplier1"], DEFAULT_SUFFIX)
     user_properties = {
         "uid": "test_bug834074",
         "cn": "test_bug834074",
@@ -296,71 +296,71 @@ def test_nsds5replicaenabled_verify(topo_m4):
     }
     users.create(properties=user_properties)
     test_user_very = users.get("test_bug834074").dn
-    # No replication no data in Master2
+    # No replication, so no data in Supplier2
     with pytest.raises(Exception):
-        repl.wait_for_replication(topo_m4.ms["master1"], topo_m4.ms["master2"])
+        repl.wait_for_replication(topo_m4.ms["supplier1"], topo_m4.ms["supplier2"])
     # Replication on
-    topo_m4.ms["master1"].modify_s(
-        topo_m4.ms["master1"].agreement.list(suffix=DEFAULT_SUFFIX)[0].dn,
+    topo_m4.ms["supplier1"].modify_s(
+        topo_m4.ms["supplier1"].agreement.list(suffix=DEFAULT_SUFFIX)[0].dn,
         [(ldap.MOD_REPLACE, "nsds5ReplicaEnabled", b"on")],
     )
-    repl.wait_for_replication(topo_m4.ms["master1"], topo_m4.ms["master2"])
-    # Now data is available on master2
-    assert len(topo_m4.ms['master2'].search_s(test_user_very, ldap.SCOPE_SUBTREE, 'objectclass=*')) == 1
-    ## Stop replication to master2
-    topo_m4.ms["master1"].modify_s(
-        topo_m4.ms["master1"].agreement.list(suffix=DEFAULT_SUFFIX)[0].dn,
+    repl.wait_for_replication(topo_m4.ms["supplier1"], topo_m4.ms["supplier2"])
+    # Now data is available on supplier2
+    assert len(topo_m4.ms['supplier2'].search_s(test_user_very, ldap.SCOPE_SUBTREE, 'objectclass=*')) == 1
+    ## Stop replication to supplier2
+    topo_m4.ms["supplier1"].modify_s(
+        topo_m4.ms["supplier1"].agreement.list(suffix=DEFAULT_SUFFIX)[0].dn,
         [(ldap.MOD_REPLACE, "nsds5ReplicaEnabled", b"off")],
     )
-    # Modify some data in master1
-    topo_m4.ms["master1"].modrdn_s(test_user_very, 'uid=test_bug834075', 1)
+    # Modify some data in supplier1
+    topo_m4.ms["supplier1"].modrdn_s(test_user_very, 'uid=test_bug834075', 1)
     with pytest.raises(Exception):
-        repl.wait_for_replication(topo_m4.ms["master1"], topo_m4.ms["master2"])
-    # changes are not replicated in master2
-        with pytest.raises(Exception): topo_m4.ms['master2'].search_s(
+        repl.wait_for_replication(topo_m4.ms["supplier1"], topo_m4.ms["supplier2"])
+    # changes are not replicated in supplier2
+        with pytest.raises(Exception): topo_m4.ms['supplier2'].search_s(
             'uid=test_bug834075,ou=People,{}'.format(DEFAULT_SUFFIX), ldap.SCOPE_SUBTREE, 'objectclass=*')
     # Turn on the replication
-    topo_m4.ms["master1"].modify_s(
-        topo_m4.ms["master1"].agreement.list(suffix=DEFAULT_SUFFIX)[0].dn,
+    topo_m4.ms["supplier1"].modify_s(
+        topo_m4.ms["supplier1"].agreement.list(suffix=DEFAULT_SUFFIX)[0].dn,
         [(ldap.MOD_REPLACE, "nsds5ReplicaEnabled", b"on")],
     )
-    repl.wait_for_replication(topo_m4.ms["master1"], topo_m4.ms["master2"])
-    # Now same data is available in master2
+    repl.wait_for_replication(topo_m4.ms["supplier1"], topo_m4.ms["supplier2"])
+    # Now same data is available in supplier2
     assert len(
-        topo_m4.ms['master2'].search_s('uid=test_bug834075,ou=People,{}'.format(DEFAULT_SUFFIX), ldap.SCOPE_SUBTREE,
+        topo_m4.ms['supplier2'].search_s('uid=test_bug834075,ou=People,{}'.format(DEFAULT_SUFFIX), ldap.SCOPE_SUBTREE,
                                        'objectclass=*')) == 1
-    # Turn off the replication from master1 to master2
-    topo_m4.ms["master1"].modify_s(
-        topo_m4.ms["master1"].agreement.list(suffix=DEFAULT_SUFFIX)[0].dn,
+    # Turn off the replication from supplier1 to supplier2
+    topo_m4.ms["supplier1"].modify_s(
+        topo_m4.ms["supplier1"].agreement.list(suffix=DEFAULT_SUFFIX)[0].dn,
         [(ldap.MOD_REPLACE, "nsds5ReplicaEnabled", b"off")],
     )
-    # delete some data in master1
-    topo_m4.ms["master1"].delete_s(
+    # delete some data in supplier1
+    topo_m4.ms["supplier1"].delete_s(
         'uid=test_bug834075,ou=People,{}'.format(DEFAULT_SUFFIX)
     )
     with pytest.raises(Exception):
-        repl.wait_for_replication(topo_m4.ms["master1"], topo_m4.ms["master2"])
-    # deleted data from master1 is still there in master2 as repliaction is off
+        repl.wait_for_replication(topo_m4.ms["supplier1"], topo_m4.ms["supplier2"])
+    # deleted data from supplier1 is still there in supplier2 as replication is off
         assert len(
-            topo_m4.ms['master2'].search_s('uid=test_bug834075,ou=People,{}'.format(DEFAULT_SUFFIX), ldap.SCOPE_SUBTREE,
+            topo_m4.ms['supplier2'].search_s('uid=test_bug834075,ou=People,{}'.format(DEFAULT_SUFFIX), ldap.SCOPE_SUBTREE,
                                            'objectclass=*')) == 1
-    topo_m4.ms["master1"].modify_s(
-        topo_m4.ms["master1"].agreement.list(suffix=DEFAULT_SUFFIX)[0].dn,
+    topo_m4.ms["supplier1"].modify_s(
+        topo_m4.ms["supplier1"].agreement.list(suffix=DEFAULT_SUFFIX)[0].dn,
         [(ldap.MOD_REPLACE, "nsds5ReplicaEnabled", b"on")],
     )
-    repl.wait_for_replication(topo_m4.ms["master1"], topo_m4.ms["master2"])
-    # After repliction is on same is gone from master2 also.
+    repl.wait_for_replication(topo_m4.ms["supplier1"], topo_m4.ms["supplier2"])
+    # After replication is back on, the same entry is gone from supplier2 as well.
     with pytest.raises(ldap.NO_SUCH_OBJECT):
-        topo_m4.ms['master2'].search_s('uid=test_bug834075,ou=People,{}'.format(DEFAULT_SUFFIX), ldap.SCOPE_SUBTREE,
+        topo_m4.ms['supplier2'].search_s('uid=test_bug834075,ou=People,{}'.format(DEFAULT_SUFFIX), ldap.SCOPE_SUBTREE,
                                        'objectclass=*')
     with pytest.raises(ldap.OPERATIONS_ERROR):
-        topo_m4.ms["master1"].modify_s(
-            topo_m4.ms["master1"]
+        topo_m4.ms["supplier1"].modify_s(
+            topo_m4.ms["supplier1"]
             .agreement.list(suffix=DEFAULT_SUFFIX)[0]
             .dn,
             [(ldap.MOD_REPLACE, "nsds5ReplicaEnabled", b"invalid")],
         )
-    for i in ["master3", "master4"]:
+    for i in ["supplier3", "supplier4"]:
         topo_m4.all_insts.get(i).start()
 
 
@@ -376,26 +376,26 @@ def test_create_an_entry_on_the_supplier(topo_m4):
         1. Should not succeed
     """
     # Bug 830344: Shut down one instance and create an entry on the supplier
-    topo_m4.ms["master1"].stop()
-    users = UserAccounts(topo_m4.ms["master2"], DEFAULT_SUFFIX)
+    topo_m4.ms["supplier1"].stop()
+    users = UserAccounts(topo_m4.ms["supplier2"], DEFAULT_SUFFIX)
     users.create_test_user(uid=4)
     # ldapsearch output
     assert \
-    topo_m4.ms["master2"].search_s('cn=replica,cn="dc=example,dc=com",cn=mapping tree,cn=config', ldap.SCOPE_SUBTREE,
+    topo_m4.ms["supplier2"].search_s('cn=replica,cn="dc=example,dc=com",cn=mapping tree,cn=config', ldap.SCOPE_SUBTREE,
                                    "(objectclass=*)", ["nsds5replicaLastUpdateStatus"], )[1].getValue(
         'nsds5replicalastupdatestatus')
-    topo_m4.ms["master1"].start()
+    topo_m4.ms["supplier1"].start()
 
 
 @pytest.mark.bz923502
 def test_bob_acceptance_tests(topo_m4):
-    """Run multiple modrdn_s operation on master1
+    """Run multiple modrdn_s operation on supplier1
 
     :id: 26eb87f2-e765-11e8-9698-8c16451d917b
     :setup: standalone
     :steps:
         1. Add entry
-        2. Run multiple modrdn_s operation on master1
+        2. Run multiple modrdn_s operations on supplier1
         3. Check everything is fine.
     :expected results:
         1. Should succeed
@@ -404,19 +404,19 @@ def test_bob_acceptance_tests(topo_m4):
     """
     # Bug description: run BOB acceptance tests...but it may not be systematic
     # Testing bug #923502: Crash in MODRDN
-    users = UserAccounts(topo_m4.ms["master1"], DEFAULT_SUFFIX)
+    users = UserAccounts(topo_m4.ms["supplier1"], DEFAULT_SUFFIX)
     repl = ReplicationManager(DEFAULT_SUFFIX)
     users.create_test_user()
     users.create_test_user(uid=2)
     for _ in range(100):
-        topo_m4.ms["master1"].modrdn_s("uid=test_user_1000,ou=People,{}".format(DEFAULT_SUFFIX), "uid=userB", 1)
-        topo_m4.ms["master1"].modrdn_s("uid=userB,ou=People,{}".format(DEFAULT_SUFFIX), "uid=test_user_1000", 1)
-    repl.wait_for_replication(topo_m4.ms["master1"], topo_m4.ms["master2"])
+        topo_m4.ms["supplier1"].modrdn_s("uid=test_user_1000,ou=People,{}".format(DEFAULT_SUFFIX), "uid=userB", 1)
+        topo_m4.ms["supplier1"].modrdn_s("uid=userB,ou=People,{}".format(DEFAULT_SUFFIX), "uid=test_user_1000", 1)
+    repl.wait_for_replication(topo_m4.ms["supplier1"], topo_m4.ms["supplier2"])
     for i in range(100):
-        topo_m4.ms["master2"].modrdn_s("uid=test_user_2,ou=People,{}".format(DEFAULT_SUFFIX), "uid=userB", 1)
-        topo_m4.ms["master2"].modrdn_s("uid=userB,ou=People,{}".format(DEFAULT_SUFFIX), "uid=test_user_2", 1)
-    assert topo_m4.ms["master1"].status() == True
-    assert topo_m4.ms["master2"].status() == True
+        topo_m4.ms["supplier2"].modrdn_s("uid=test_user_2,ou=People,{}".format(DEFAULT_SUFFIX), "uid=userB", 1)
+        topo_m4.ms["supplier2"].modrdn_s("uid=userB,ou=People,{}".format(DEFAULT_SUFFIX), "uid=test_user_2", 1)
+    assert topo_m4.ms["supplier1"].status() == True
+    assert topo_m4.ms["supplier2"].status() == True
 
 
 @pytest.mark.bz830335
@@ -427,8 +427,8 @@ def test_replica_backup_and_restore(topo_m4):
     :setup: standalone
     :steps:
         1. Add entries
-        2. Take backup db2ldif on master1
-        3. Delete entries on master1
+        2. Take a db2ldif backup on supplier1
+        3. Delete entries on supplier1
         4. Restore entries ldif2db
         5. Check entries
     :expected results:
@@ -440,14 +440,14 @@ def test_replica_backup_and_restore(topo_m4):
     """
     # Testing bug #830335: Taking a replica backup and restore on M1 after deleting a few entries from M1 and M2
     repl = ReplicationManager(DEFAULT_SUFFIX)
-    users = UserAccounts(topo_m4.ms["master3"], DEFAULT_SUFFIX)
+    users = UserAccounts(topo_m4.ms["supplier3"], DEFAULT_SUFFIX)
     for i in range(20, 25):
         users.create_test_user(uid=i)
         time.sleep(1)
-    repl.wait_for_replication(topo_m4.ms["master1"], topo_m4.ms["master2"])
-    repl.test_replication(topo_m4.ms["master1"], topo_m4.ms["master2"], 30)
-    topo_m4.ms["master1"].stop()
-    topo_m4.ms["master1"].db2ldif(
+    repl.wait_for_replication(topo_m4.ms["supplier1"], topo_m4.ms["supplier2"])
+    repl.test_replication(topo_m4.ms["supplier1"], topo_m4.ms["supplier2"], 30)
+    topo_m4.ms["supplier1"].stop()
+    topo_m4.ms["supplier1"].db2ldif(
         bename=DEFAULT_BENAME,
         suffixes=[DEFAULT_SUFFIX],
         excludeSuffixes=[],
@@ -455,24 +455,24 @@ def test_replica_backup_and_restore(topo_m4):
         repl_data=True,
         outputfile="/tmp/output_file",
     )
-    topo_m4.ms["master1"].start()
-    for i in users.list(): topo_m4.ms["master1"].delete_s(i.dn)
-    repl.wait_for_replication(topo_m4.ms["master1"], topo_m4.ms["master2"])
-    repl.test_replication(topo_m4.ms["master1"], topo_m4.ms["master2"], 30)
-    topo_m4.ms["master1"].stop()
-    topo_m4.ms["master1"].ldif2db(
+    topo_m4.ms["supplier1"].start()
+    for i in users.list(): topo_m4.ms["supplier1"].delete_s(i.dn)
+    repl.wait_for_replication(topo_m4.ms["supplier1"], topo_m4.ms["supplier2"])
+    repl.test_replication(topo_m4.ms["supplier1"], topo_m4.ms["supplier2"], 30)
+    topo_m4.ms["supplier1"].stop()
+    topo_m4.ms["supplier1"].ldif2db(
         bename=None,
         excludeSuffixes=None,
         encrypt=False,
         suffixes=[DEFAULT_SUFFIX],
         import_file="/tmp/output_file",
     )
-    topo_m4.ms["master1"].start()
+    topo_m4.ms["supplier1"].start()
     for i in range(20, 25):
         users.create_test_user(uid=i)
         time.sleep(1)
-    repl.wait_for_replication(topo_m4.ms["master1"], topo_m4.ms["master2"])
-    repl.test_replication(topo_m4.ms["master1"], topo_m4.ms["master2"], 30)
+    repl.wait_for_replication(topo_m4.ms["supplier1"], topo_m4.ms["supplier2"])
+    repl.test_replication(topo_m4.ms["supplier1"], topo_m4.ms["supplier2"], 30)
 
 
 if __name__ == "__main__":

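test_replica_backup_and_restore exercises the offline export/import pair seen above; a condensed sketch of that flow, using the same call signatures as the hunk:

    from lib389._constants import DEFAULT_BENAME, DEFAULT_SUFFIX

    supplier1 = topo_m4.ms["supplier1"]
    supplier1.stop()
    # repl_data=True keeps the replication metadata (RUV/CSNs) in the
    # exported LDIF so the restored replica can rejoin the topology.
    supplier1.db2ldif(bename=DEFAULT_BENAME, suffixes=[DEFAULT_SUFFIX],
                      excludeSuffixes=[], encrypt=False, repl_data=True,
                      outputfile="/tmp/output_file")
    supplier1.start()
    # ... delete some entries, let the deletes replicate, then restore:
    supplier1.stop()
    supplier1.ldif2db(bename=None, excludeSuffixes=None, encrypt=False,
                      suffixes=[DEFAULT_SUFFIX],
                      import_file="/tmp/output_file")
    supplier1.start()
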
+ 49 - 49
dirsrvtests/tests/suites/fractional/fractional_test.py

@@ -24,7 +24,7 @@ import ldap
 
 pytestmark = pytest.mark.tier1
 
-MASTER1 = MASTER2 = CONSUMER1 = CONSUMER2 = None
+SUPPLIER1 = SUPPLIER2 = CONSUMER1 = CONSUMER2 = None
 
 
 def _create_users(instance, cn_cn, sn_sn, givenname, ou_ou, l_l, uid, mail,
@@ -55,8 +55,8 @@ def check_all_replicated():
     """
     Will check replication status
     """
-    for master in [MASTER2, CONSUMER1, CONSUMER2]:
-        ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(MASTER1, master, timeout=100)
+    for supplier in [SUPPLIER2, CONSUMER1, CONSUMER2]:
+        ReplicationManager(DEFAULT_SUFFIX).wait_for_replication(SUPPLIER1, supplier, timeout=100)
 
 
 @pytest.fixture(scope="module")
@@ -65,16 +65,16 @@ def _create_entries(topology_m2c2):
     A fixture that will create first test user and create fractional Agreement
     """
     # Defined as globals, as the same values are used everywhere under the same names.
-    global MASTER1, MASTER2, CONSUMER1, CONSUMER2
-    MASTER1 = topology_m2c2.ms['master1']
-    MASTER2 = topology_m2c2.ms['master2']
+    global SUPPLIER1, SUPPLIER2, CONSUMER1, CONSUMER2
+    SUPPLIER1 = topology_m2c2.ms['supplier1']
+    SUPPLIER2 = topology_m2c2.ms['supplier2']
     CONSUMER1 = topology_m2c2.cs['consumer1']
     CONSUMER2 = topology_m2c2.cs['consumer2']
-    users = UserAccounts(MASTER1, DEFAULT_SUFFIX)
+    users = UserAccounts(SUPPLIER1, DEFAULT_SUFFIX)
     _create_users(users, 'Sam Carter', 'Carter', 'Sam', ['Accounting', 'People'],
                   'Sunnyvale', 'scarter', '[email protected]', '+1 408 555 4798',
                   '+1 408 555 9751', '4612')
-    for ins, num in [(MASTER1, 1), (MASTER2, 2), (MASTER1, 2), (MASTER2, 1)]:
+    for ins, num in [(SUPPLIER1, 1), (SUPPLIER2, 2), (SUPPLIER1, 2), (SUPPLIER2, 1)]:
         Agreements(ins).list()[num].replace(
             'nsDS5ReplicatedAttributeList',
             '(objectclass=*) $ EXCLUDE audio businessCategory carLicense departmentNumber '
@@ -95,7 +95,7 @@ def test_fractional_agreements(_create_entries):
     agreements, but not with fractional agreements.
 
     :id: f22395e0-38ea-11ea-abe0-8c16451d917b
-    :setup: Master and Consumer
+    :setup: Supplier and Consumer
     :steps:
         1. Add test entry
         2. Search for an entry with disallowed attributes on every server.
@@ -111,8 +111,8 @@ def test_fractional_agreements(_create_entries):
     check_all_replicated()
     # Search for an entry with disallowed attributes on every server.
     for attr in ['telephonenumber', 'facsimiletelephonenumber', 'roomnumber']:
-        assert UserAccount(MASTER1, f'uid=scarter,ou=People,{DEFAULT_SUFFIX}').get_attr_val(attr)
-        assert UserAccount(MASTER2, f'uid=scarter,ou=People,{DEFAULT_SUFFIX}').get_attr_val(attr)
+        assert UserAccount(SUPPLIER1, f'uid=scarter,ou=People,{DEFAULT_SUFFIX}').get_attr_val(attr)
+        assert UserAccount(SUPPLIER2, f'uid=scarter,ou=People,{DEFAULT_SUFFIX}').get_attr_val(attr)
     # The attributes should be present on the two suppliers with
     # traditional replication agreements
     for attr in ['telephonenumber', 'facsimiletelephonenumber', 'roomnumber']:
@@ -126,7 +126,7 @@ def test_read_only_consumer(_create_entries):
     """Attempt to modify an entry on read-only consumer.
 
     :id: f97f0fea-38ea-11ea-a617-8c16451d917b
-    :setup: Master and Consumer
+    :setup: Supplier and Consumer
     :steps:
         1. Add test entry
         2. First attempt to modify an attribute that should be visible (mail)
@@ -153,7 +153,7 @@ def test_read_write_supplier(_create_entries):
     """Attempt to modify an entry on read-write supplier
 
     :id: ff50a8b6-38ea-11ea-870f-8c16451d917b
-    :setup: Master and Consumer
+    :setup: Supplier and Consumer
     :steps:
         1. Add test entry
         2. First attempt to modify an attribute that should be visible (mail)
@@ -167,13 +167,13 @@ def test_read_write_supplier(_create_entries):
         4. Success
     """
     # Add test entry
-    user_master1 = UserAccount(MASTER1, f'uid=scarter,ou=People,{DEFAULT_SUFFIX}')
+    user_supplier1 = UserAccount(SUPPLIER1, f'uid=scarter,ou=People,{DEFAULT_SUFFIX}')
     # First attempt to modify an attribute that should be visible (mail)
     for attr, value in [('mail', '[email protected]'), ('roomnumber', '123')]:
-        user_master1.replace(attr, value)
+        user_supplier1.replace(attr, value)
     check_all_replicated()
-    for ins, attr in [(MASTER2, 'mail'),
-                      (MASTER2, 'roomnumber'),
+    for ins, attr in [(SUPPLIER2, 'mail'),
+                      (SUPPLIER2, 'roomnumber'),
                       (CONSUMER1, 'mail'),
                       (CONSUMER2, 'mail')]:
         if attr == 'mail':
@@ -195,11 +195,11 @@ def test_filtered_attributes(_create_entries):
     """Filtered attributes are not replicated to CONSUMER1 or CONSUMER2.
 
     :id: 051b40ee-38eb-11ea-9126-8c16451d917b
-    :setup: Master and Consumer
+    :setup: Supplier and Consumer
     :steps:
-        1. Add a new entry to MASTER1.
+        1. Add a new entry to SUPPLIER1.
         2. Confirm that it is replicated in entirety
-           to MASTER2, but that filtered attributes are not replicated to
+           to SUPPLIER2, but that filtered attributes are not replicated to
            CONSUMER1 or CONSUMER2.
         3. The entry should be present in all servers.  Filtered attributes should not
            be available from the consumers with fractional replication agreements.
@@ -208,17 +208,17 @@ def test_filtered_attributes(_create_entries):
         2. Success
         3. Success
     """
-    # Add a new entry to MASTER1.
-    users = UserAccounts(MASTER1, DEFAULT_SUFFIX)
+    # Add a new entry to SUPPLIER1.
+    users = UserAccounts(SUPPLIER1, DEFAULT_SUFFIX)
     _create_users(users, 'Anuj Borah', 'aborah', 'Anuj', 'People', 'ok',
                   'aborah', '[email protected]', '+1121', '+121', '2121')
     check_all_replicated()
-    for instance in [MASTER1, MASTER2, CONSUMER1, CONSUMER2]:
+    for instance in [SUPPLIER1, SUPPLIER2, CONSUMER1, CONSUMER2]:
         assert UserAccount(instance,
                            f'uid=aborah,'
                            f'ou=People,{DEFAULT_SUFFIX}').get_attr_val_utf8('mail') == \
                '[email protected]'
-    for instance in [MASTER1, MASTER2]:
+    for instance in [SUPPLIER1, SUPPLIER2]:
         assert UserAccount(instance,
                            f'uid=aborah,'
                            f'ou=People,{DEFAULT_SUFFIX}').get_attr_val_utf8('roomnumber') == '2121'
@@ -238,9 +238,9 @@ def test_fewer_changes_in_single_operation(_create_entries):
     The primary test is that all servers are still alive.
 
     :id: 0d1d6218-38eb-11ea-8945-8c16451d917b
-    :setup: Master and Consumer
+    :setup: Supplier and Consumer
     :steps:
-        1. Add a new entry to MASTER1.
+        1. Add a new entry to SUPPLIER1.
         2. Fewer changes (but more than one) in a single operation to fractionally
            replicated attributes than the number of fractionally replicated attributes.
         3. All servers are still alive.
@@ -249,7 +249,7 @@ def test_fewer_changes_in_single_operation(_create_entries):
         2. Success
         3. Success
     """
-    users = UserAccounts(MASTER1, DEFAULT_SUFFIX)
+    users = UserAccounts(SUPPLIER1, DEFAULT_SUFFIX)
     user = _create_users(users, 'Anuj Borah1', 'aborah1', 'Anuj1', 'People',
                          'ok1', 'aborah1', '[email protected]', '+11212', '+1212', '21231')
     check_all_replicated()
@@ -260,7 +260,7 @@ def test_fewer_changes_in_single_operation(_create_entries):
     user.replace_many(('mail', '[email protected]'), ('sn', 'Oak'), ('l', 'NewPlace'))
     check_all_replicated()
     # All servers are still alive.
-    for ints in [MASTER1, MASTER2, CONSUMER1, CONSUMER2]:
+    for ints in [SUPPLIER1, SUPPLIER2, CONSUMER1, CONSUMER2]:
         assert UserAccount(ints, user.dn).get_attr_val_utf8('mail') == '[email protected]'
         assert UserAccount(ints, user.dn).get_attr_val_utf8('sn') == 'Oak'
 
@@ -268,17 +268,17 @@ def test_fewer_changes_in_single_operation(_create_entries):
 @pytest.fixture(scope="function")
 def _add_user_clean(request):
     # Enabling the memberOf plugin and then adding a few groups with member attributes.
-    MemberOfPlugin(MASTER1).enable()
-    for instance in (MASTER1, MASTER2):
+    MemberOfPlugin(SUPPLIER1).enable()
+    for instance in (SUPPLIER1, SUPPLIER2):
         instance.restart()
-    user1 = UserAccounts(MASTER1, DEFAULT_SUFFIX).create_test_user()
+    user1 = UserAccounts(SUPPLIER1, DEFAULT_SUFFIX).create_test_user()
     for attribute, value in [("displayName", "Anuj Borah"),
                              ("givenName", "aborah"),
                              ("telephoneNumber", "+1 555 999 333"),
                              ("roomnumber", "123"),
                              ("manager", f'uid=dsmith,ou=People,{DEFAULT_SUFFIX}')]:
         user1.set(attribute, value)
-    grp = Groups(MASTER1, DEFAULT_SUFFIX).create(properties={
+    grp = Groups(SUPPLIER1, DEFAULT_SUFFIX).create(properties={
         "cn": "bug739172_01group",
         "member": f'uid=test_user_1000,ou=People,{DEFAULT_SUFFIX}'
     })
@@ -297,7 +297,7 @@ def test_newly_added_attribute_nsds5replicatedattributelisttotal(_create_entries
     """This test case is to test the newly added attribute nsds5replicatedattributelistTotal.
 
     :id: 2df5971c-38eb-11ea-9e8e-8c16451d917b
-    :setup: Master and Consumer
+    :setup: Supplier and Consumer
     :steps:
         1. Enabling the memberOf plugin and then adding a few groups with member attributes.
         2. No memberOf plugin enabled on read only replicas
@@ -310,7 +310,7 @@ def test_newly_added_attribute_nsds5replicatedattributelisttotal(_create_entries
     """
     check_all_replicated()
     user = f'uid=test_user_1000,ou=People,{DEFAULT_SUFFIX}'
-    for instance in (MASTER1, MASTER2, CONSUMER1, CONSUMER2):
+    for instance in (SUPPLIER1, SUPPLIER2, CONSUMER1, CONSUMER2):
         assert Groups(instance, DEFAULT_SUFFIX).list()[1].get_attr_val_utf8("member") == user
         assert UserAccount(instance, user).get_attr_val_utf8("sn") == "test_user_1000"
     # The attributes mentioned in the nsds5replicatedattributelist
@@ -325,9 +325,9 @@ def test_attribute_nsds5replicatedattributelisttotal(_create_entries, _add_user_
     """This test case is to test the newly added attribute nsds5replicatedattributelistTotal.
 
     :id: 35de9ff0-38eb-11ea-b420-8c16451d917b
-    :setup: Master and Consumer
+    :setup: Supplier and Consumer
     :steps:
-        1. Add a new entry to MASTER1.
+        1. Add a new entry to SUPPLIER1.
         2. Enabling the memberOf plugin and then adding a few groups with member attributes.
         3. No memberOf plugin enabled on the other consumers, i.e., the read-only replicas
            won't get incremental updates for the attributes mentioned in the list.
@@ -340,12 +340,12 @@ def test_attribute_nsds5replicatedattributelisttotal(_create_entries, _add_user_
     """
     # Run total update and verify the same attributes added/modified in the read-only replicas.
     user = f'uid=test_user_1000,ou=People,{DEFAULT_SUFFIX}'
-    for agreement in Agreements(MASTER1).list():
+    for agreement in Agreements(SUPPLIER1).list():
         agreement.begin_reinit()
         agreement.wait_reinit()
     check_all_replicated()
-    for instance in (MASTER1, MASTER2):
-        assert Groups(MASTER1, DEFAULT_SUFFIX).list()[1].get_attr_val_utf8("member") == user
+    for instance in (SUPPLIER1, SUPPLIER2):
+        assert Groups(SUPPLIER1, DEFAULT_SUFFIX).list()[1].get_attr_val_utf8("member") == user
         assert UserAccount(instance, user).get_attr_val_utf8("sn") == "test_user_1000"
     for instance in (CONSUMER1, CONSUMER2):
         for value in ("memberOf", "manager", "sn"):
@@ -361,9 +361,9 @@ def test_implicit_replication_of_password_policy(_create_entries):
     modify operation
 
     :id: 3f4affe8-38eb-11ea-8936-8c16451d917b
-    :setup: Master and Consumer
+    :setup: Supplier and Consumer
     :steps:
-        1. Add a new entry to MASTER1.
+        1. Add a new entry to SUPPLIER1.
         2. Try binding user with correct password
         3. Try binding user with incorrect password (twice)
         4. Make sure user got locked
@@ -377,25 +377,25 @@ def test_implicit_replication_of_password_policy(_create_entries):
     """
     for attribute, value in [("passwordlockout", "on"),
                              ("passwordmaxfailure", "1")]:
-        Config(MASTER1).set(attribute, value)
-    user = UserAccounts(MASTER1, DEFAULT_SUFFIX).create_test_user()
+        Config(SUPPLIER1).set(attribute, value)
+    user = UserAccounts(SUPPLIER1, DEFAULT_SUFFIX).create_test_user()
     user.set("userpassword", "ItsmeAnuj")
     check_all_replicated()
-    assert UserAccount(MASTER2, user.dn).get_attr_val_utf8("uid") == "test_user_1000"
+    assert UserAccount(SUPPLIER2, user.dn).get_attr_val_utf8("uid") == "test_user_1000"
     # Try binding user with correct password
-    conn = UserAccount(MASTER2, user.dn).bind("ItsmeAnuj")
+    conn = UserAccount(SUPPLIER2, user.dn).bind("ItsmeAnuj")
     with pytest.raises(ldap.INVALID_CREDENTIALS):
-        UserAccount(MASTER1, user.dn).bind("badpass")
+        UserAccount(SUPPLIER1, user.dn).bind("badpass")
     with pytest.raises(ldap.CONSTRAINT_VIOLATION):
-        UserAccount(MASTER1, user.dn).bind("badpass")
+        UserAccount(SUPPLIER1, user.dn).bind("badpass")
     # asserting user got locked
     with pytest.raises(ldap.CONSTRAINT_VIOLATION):
-        conn = UserAccount(MASTER1, user.dn).bind("ItsmeAnuj")
+        conn = UserAccount(SUPPLIER1, user.dn).bind("ItsmeAnuj")
     check_all_replicated()
     # modify user and verify that replication is still working
     user.replace("seealso", "cn=seealso")
     check_all_replicated()
-    for instance in (MASTER1, MASTER2):
+    for instance in (SUPPLIER1, SUPPLIER2):
         assert UserAccount(instance, user.dn).get_attr_val_utf8("seealso") == "cn=seealso"
 
 

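The _create_entries fixture above turns on fractional replication by rewriting each agreement's exclude list; a minimal sketch of that knob (the two excluded attributes here are a shortened, illustrative version of the test's much longer list):

    from lib389.agreement import Agreements

    agmt = Agreements(SUPPLIER1).list()[0]
    # Strip these attributes from incremental updates on this agreement;
    # nsds5replicatedattributelistTotal applies the same filtering to
    # total (re-init) updates.
    agmt.replace('nsDS5ReplicatedAttributeList',
                 '(objectclass=*) $ EXCLUDE audio businessCategory')
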
+ 33 - 33
dirsrvtests/tests/suites/gssapi_repl/gssapi_repl_test.py

@@ -31,8 +31,8 @@ log = logging.getLogger(__name__)
 
 REALM = "EXAMPLE.COM"
 
-HOST_MASTER_1 = 'ldapkdc1.example.com'
-HOST_MASTER_2 = 'ldapkdc2.example.com'
+HOST_SUPPLIER_1 = 'ldapkdc1.example.com'
+HOST_SUPPLIER_2 = 'ldapkdc2.example.com'
 
 
 def _create_machine_ou(inst):
@@ -70,15 +70,15 @@ def _allow_machine_account(inst, name):
 
 
 def test_gssapi_repl(topology_m2):
-    """Test gssapi authenticated replication agreement of two masters using KDC
+    """Test gssapi authenticated replication agreement of two suppliers using KDC
 
     :id: 552850aa-afc3-473e-9c39-aae802b46f11
 
-    :setup: MMR with two masters
+    :setup: MMR with two suppliers
 
     :steps:
-         1. Create the locations on each master for the other master to bind to
-         2. Set on the cn=replica config to accept the other masters mapping under mapping tree
+         1. Create the locations on each supplier for the other supplier to bind to
+         2. Set the cn=replica config to accept the other supplier's mapping under the mapping tree
          3. Create the replication agreements from M1->M2 and vice versa (M2->M1)
          4. Set the replica bind method to sasl gssapi for both agreements
          5. Initialize all the agreements
@@ -96,46 +96,46 @@ def test_gssapi_repl(topology_m2):
     """
 
     return
-    master1 = topology_m2.ms["master1"]
-    master2 = topology_m2.ms["master2"]
+    supplier1 = topology_m2.ms["supplier1"]
+    supplier2 = topology_m2.ms["supplier2"]
 
-    # Create the locations on each master for the other to bind to.
-    _create_machine_ou(master1)
-    _create_machine_ou(master2)
+    # Create the locations on each supplier for the other to bind to.
+    _create_machine_ou(supplier1)
+    _create_machine_ou(supplier2)
 
-    _create_machine_account(master1, 'ldap/%s' % HOST_MASTER_1)
-    _create_machine_account(master1, 'ldap/%s' % HOST_MASTER_2)
-    _create_machine_account(master2, 'ldap/%s' % HOST_MASTER_1)
-    _create_machine_account(master2, 'ldap/%s' % HOST_MASTER_2)
+    _create_machine_account(supplier1, 'ldap/%s' % HOST_SUPPLIER_1)
+    _create_machine_account(supplier1, 'ldap/%s' % HOST_SUPPLIER_2)
+    _create_machine_account(supplier2, 'ldap/%s' % HOST_SUPPLIER_1)
+    _create_machine_account(supplier2, 'ldap/%s' % HOST_SUPPLIER_2)
 
-    # Set on the cn=replica config to accept the other masters princ mapping under mapping tree
-    _allow_machine_account(master1, 'ldap/%s' % HOST_MASTER_2)
-    _allow_machine_account(master2, 'ldap/%s' % HOST_MASTER_1)
+    # Set the cn=replica config to accept the other supplier's princ mapping under the mapping tree
+    _allow_machine_account(supplier1, 'ldap/%s' % HOST_SUPPLIER_2)
+    _allow_machine_account(supplier2, 'ldap/%s' % HOST_SUPPLIER_1)
 
     #
     # Create all the agreements
     #
-    # Creating agreement from master 1 to master 2
+    # Creating agreement from supplier 1 to supplier 2
 
     # Set the replica bind method to sasl gssapi
     properties = {RA_NAME: r'meTo_$host:$port',
                   RA_METHOD: 'SASL/GSSAPI',
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
-    m1_m2_agmt = master1.agreement.create(suffix=SUFFIX, host=master2.host, port=master2.port, properties=properties)
+    m1_m2_agmt = supplier1.agreement.create(suffix=SUFFIX, host=supplier2.host, port=supplier2.port, properties=properties)
     if not m1_m2_agmt:
-        log.fatal("Fail to create a master -> master replica agreement")
+        log.fatal("Fail to create a supplier -> supplier replica agreement")
         sys.exit(1)
     log.debug("%s created" % m1_m2_agmt)
 
-    # Creating agreement from master 2 to master 1
+    # Creating agreement from supplier 2 to supplier 1
 
     # Set the replica bind method to sasl gssapi
     properties = {RA_NAME: r'meTo_$host:$port',
                   RA_METHOD: 'SASL/GSSAPI',
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
-    m2_m1_agmt = master2.agreement.create(suffix=SUFFIX, host=master1.host, port=master1.port, properties=properties)
+    m2_m1_agmt = supplier2.agreement.create(suffix=SUFFIX, host=supplier1.host, port=supplier1.port, properties=properties)
     if not m2_m1_agmt:
-        log.fatal("Fail to create a master -> master replica agreement")
+        log.fatal("Fail to create a supplier -> supplier replica agreement")
         sys.exit(1)
     log.debug("%s created" % m2_m1_agmt)
 
@@ -145,26 +145,26 @@ def test_gssapi_repl(topology_m2):
     #
     # Initialize all the agreements
     #
-    master1.agreement.init(SUFFIX, HOST_MASTER_2, PORT_MASTER_2)
-    master1.waitForReplInit(m1_m2_agmt)
+    supplier1.agreement.init(SUFFIX, HOST_SUPPLIER_2, PORT_SUPPLIER_2)
+    supplier1.waitForReplInit(m1_m2_agmt)
 
     # Check replication is working...
-    if master1.testReplication(DEFAULT_SUFFIX, master2):
+    if supplier1.testReplication(DEFAULT_SUFFIX, supplier2):
         log.info('Replication is working.')
     else:
         log.fatal('Replication is not working.')
         assert False
 
-    # Add a user to master 1
-    _create_machine_account(master1, 'http/one.example.com')
+    # Add a user to supplier 1
+    _create_machine_account(supplier1, 'http/one.example.com')
     # Check it's on 2
     time.sleep(5)
-    assert (_check_machine_account(master2, 'http/one.example.com'))
-    # Add a user to master 2
-    _create_machine_account(master2, 'http/two.example.com')
+    assert (_check_machine_account(supplier2, 'http/one.example.com'))
+    # Add a user to supplier 2
+    _create_machine_account(supplier2, 'http/two.example.com')
     # Check it's on 1
     time.sleep(5)
-    assert (_check_machine_account(master2, 'http/two.example.com'))
+    assert (_check_machine_account(supplier1, 'http/two.example.com'))
 
 
 if __name__ == '__main__':
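
Stripped of logging and error handling, the agreement setup above reduces to the following sketch; `supplier1` and `supplier2` are assumed DirSrv instances, and the constant import paths shown are an assumption (the test module itself relies on lib389 star imports):

    from lib389._constants import SUFFIX, REPLICATION_TRANSPORT, defaultProperties
    from lib389.properties import RA_NAME, RA_METHOD, RA_TRANSPORT_PROT

    # SASL/GSSAPI instead of the default simple bind for the replica connection.
    properties = {RA_NAME: r'meTo_$host:$port',
                  RA_METHOD: 'SASL/GSSAPI',
                  RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
    agmt = supplier1.agreement.create(suffix=SUFFIX, host=supplier2.host,
                                      port=supplier2.port, properties=properties)
    # Total init of supplier2 from supplier1, then wait for it to finish.
    supplier1.agreement.init(SUFFIX, supplier2.host, supplier2.port)
    supplier1.waitForReplInit(agmt)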

+ 12 - 12
dirsrvtests/tests/suites/healthcheck/health_repl_test.py

@@ -96,8 +96,8 @@ def test_healthcheck_replication_replica_not_reachable(topology_m2):
 
     RET_CODE = 'DSREPLLE0005'
 
-    M1 = topology_m2.ms['master1']
-    M2 = topology_m2.ms['master2']
+    M1 = topology_m2.ms['supplier1']
+    M2 = topology_m2.ms['supplier2']
 
     set_changelog_trimming(M1)
 
@@ -146,7 +146,7 @@ def test_healthcheck_changelog_trimming_not_configured(topology_m2):
         7. Healthcheck reports no issue found (json)
     """
 
-    M1 = topology_m2.ms['master1']
+    M1 = topology_m2.ms['supplier1']
 
     RET_CODE = 'DSCLLE0001'
 
@@ -190,8 +190,8 @@ def test_healthcheck_replication_presence_of_conflict_entries(topology_m2):
 
     RET_CODE = 'DSREPLLE0002'
 
-    M1 = topology_m2.ms['master1']
-    M2 = topology_m2.ms['master2']
+    M1 = topology_m2.ms['supplier1']
+    M2 = topology_m2.ms['supplier2']
 
     repl = ReplicationManager(DEFAULT_SUFFIX)
     repl.wait_for_replication(M1, M2)
@@ -226,7 +226,7 @@ def test_healthcheck_non_replicated_suffixes(topology_m2):
         2. Success
     """
 
-    inst = topology_m2.ms['master1']
+    inst = topology_m2.ms['supplier1']
 
     # Create second suffix
     backends = Backends(inst)
@@ -255,7 +255,7 @@ def test_healthcheck_replication_out_of_sync_broken(topology_m3):
     :id: b5ae7cae-de0f-4206-95a4-f81538764bea
     :setup: 3 MMR topology
     :steps:
-        1. Create a 3 masters full-mesh topology, on M2 and M3 don’t set nsds5BeginReplicaRefresh:start
+        1. Create a 3-supplier full-mesh topology; on M2 and M3 don’t set nsds5BeginReplicaRefresh:start
         2. Perform modifications on M1
         3. Use HealthCheck without --json option
         4. Use HealthCheck with --json option
@@ -268,11 +268,11 @@ def test_healthcheck_replication_out_of_sync_broken(topology_m3):
 
     RET_CODE = 'DSREPLLE0001'
 
-    M1 = topology_m3.ms['master1']
-    M2 = topology_m3.ms['master2']
-    M3 = topology_m3.ms['master3']
+    M1 = topology_m3.ms['supplier1']
+    M2 = topology_m3.ms['supplier2']
+    M3 = topology_m3.ms['supplier3']
 
-    log.info('Break master2 and master3')
+    log.info('Break supplier2 and supplier3')
     replicas = Replicas(M2)
     replica = replicas.list()[0]
     replica.replace('nsds5ReplicaBindDNGroup', 'cn=repl')
@@ -281,7 +281,7 @@ def test_healthcheck_replication_out_of_sync_broken(topology_m3):
     replica = replicas.list()[0]
     replica.replace('nsds5ReplicaBindDNGroup', 'cn=repl')
 
-    log.info('Perform update on master1')
+    log.info('Perform update on supplier1')
     test_users_m1 = UserAccounts(M1, DEFAULT_SUFFIX)
     test_users_m1.create_test_user(1005, 2000)
 

+ 6 - 6
dirsrvtests/tests/suites/healthcheck/health_sync_test.py

@@ -67,7 +67,7 @@ def test_healthcheck_replication_out_of_sync_not_broken(topology_m3):
     :id: 8305000d-ba4d-4c00-8331-be0e8bd92150
     :setup: 3 MMR topology
     :steps:
-        1. Create a 3 masters full-mesh topology, all replicas being synchronized
+        1. Create a 3-supplier full-mesh topology with all replicas synchronized
         2. Stop M1
         3. Perform an update on M2 and M3.
         4. Check M2 and M3 are synchronized.
@@ -92,14 +92,14 @@ def test_healthcheck_replication_out_of_sync_not_broken(topology_m3):
 
     RET_CODE = 'DSREPLLE0003'
 
-    M1 = topology_m3.ms['master1']
-    M2 = topology_m3.ms['master2']
-    M3 = topology_m3.ms['master3']
+    M1 = topology_m3.ms['supplier1']
+    M2 = topology_m3.ms['supplier2']
+    M3 = topology_m3.ms['supplier3']
 
-    log.info('Stop master1')
+    log.info('Stop supplier1')
     M1.stop()
 
-    log.info('Perform update on master2 and master3')
+    log.info('Perform update on supplier2 and supplier3')
     test_users_m2 = UserAccounts(M2, DEFAULT_SUFFIX)
     test_users_m3 = UserAccounts(M3, DEFAULT_SUFFIX)
     test_users_m2.create_test_user(1000, 2000)

+ 10 - 10
dirsrvtests/tests/suites/healthcheck/healthcheck_test.py

@@ -289,7 +289,7 @@ def test_healthcheck_replication(topology_m2):
     :id: 9ee6d491-d6d7-4c2c-ac78-70d08f054166
     :setup: 2 MM topology
     :steps:
-        1. Create a two masters replication topology
+        1. Create a two-supplier replication topology
         2. Set nsslapd-changelogmaxage to 30d
         3. Use HealthCheck without --json option
         4. Use HealthCheck with --json option
@@ -300,18 +300,18 @@ def test_healthcheck_replication(topology_m2):
         4. Success
     """
 
-    M1 = topology_m2.ms['master1']
-    M2 = topology_m2.ms['master2']
+    M1 = topology_m2.ms['supplier1']
+    M2 = topology_m2.ms['supplier2']
 
     # If we don't set changelog trimming, we will get error DSCLLE0001
     set_changelog_trimming(M1)
     set_changelog_trimming(M2)
 
-    log.info('Run healthcheck for master1')
+    log.info('Run healthcheck for supplier1')
     run_healthcheck_and_flush_log(topology_m2, M1, CMD_OUTPUT, json=False)
     run_healthcheck_and_flush_log(topology_m2, M1, JSON_OUTPUT, json=True)
 
-    log.info('Run healthcheck for master2')
+    log.info('Run healthcheck for supplier2')
     run_healthcheck_and_flush_log(topology_m2, M2, CMD_OUTPUT, json=False)
     run_healthcheck_and_flush_log(topology_m2, M2, JSON_OUTPUT, json=True)
 
@@ -325,7 +325,7 @@ def test_healthcheck_replication_tls(topology_m2):
     :id: 9ee6d491-d6d7-4c2c-ac78-70d08f054166
     :setup: 2 MM topology
     :steps:
-        1. Create a two masters replication topology
+        1. Create a two-supplier replication topology
         2. Enable TLS
         3. Set nsslapd-changelogmaxage to 30d
         4. Use HealthCheck without --json option
@@ -338,17 +338,17 @@ def test_healthcheck_replication_tls(topology_m2):
         5. Success
     """
 
-    M1 = topology_m2.ms['master1']
-    M2 = topology_m2.ms['master2']
+    M1 = topology_m2.ms['supplier1']
+    M2 = topology_m2.ms['supplier2']
 
     M1.enable_tls()
     M2.enable_tls()
 
-    log.info('Run healthcheck for master1')
+    log.info('Run healthcheck for supplier1')
     run_healthcheck_and_flush_log(topology_m2, M1, CMD_OUTPUT, json=False)
     run_healthcheck_and_flush_log(topology_m2, M1, JSON_OUTPUT, json=True)
 
-    log.info('Run healthcheck for master2')
+    log.info('Run healthcheck for supplier2')
     run_healthcheck_and_flush_log(topology_m2, M2, CMD_OUTPUT, json=False)
     run_healthcheck_and_flush_log(topology_m2, M2, JSON_OUTPUT, json=True)
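
Outside of pytest the same report can be produced with the dsctl CLI; a hedged sketch, where the serverid `slapd-supplier1` is a placeholder and the `-j` JSON flag placement is an assumption:

    import subprocess

    # Plain text report, then the JSON variant the assertions above parse.
    subprocess.run(["dsctl", "slapd-supplier1", "healthcheck"], check=True)
    subprocess.run(["dsctl", "-j", "slapd-supplier1", "healthcheck"], check=True)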
 

+ 4 - 4
dirsrvtests/tests/suites/lib389/idm/user_compare_m2Repl_test.py

@@ -9,11 +9,11 @@ pytestmark = pytest.mark.tier1
 
 def test_user_compare_m2Repl(topology_m2):
     """
-    User compare test between users of master to master replicaton topology.
+    User compare test between users of a supplier-to-supplier replication topology.
 
     :id: 7c243bea-4075-4304-864d-5b789d364871
 
-    :setup: 2 master MMR
+    :setup: 2 supplier MMR
 
     :steps: 1. Add a user to m1
             2. Wait for replication
@@ -24,8 +24,8 @@ def test_user_compare_m2Repl(topology_m2):
                       3. The user is the same
     """
     rm = ReplicationManager(DEFAULT_SUFFIX)
-    m1 = topology_m2.ms.get('master1')
-    m2 = topology_m2.ms.get('master2')
+    m1 = topology_m2.ms.get('supplier1')
+    m2 = topology_m2.ms.get('supplier2')
 
     m1_users = UserAccounts(m1, DEFAULT_SUFFIX)
     m2_users = UserAccounts(m2, DEFAULT_SUFFIX)
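
Continuing with the names defined just above, the compare itself follows the usual write/wait/read pattern; the uid value is the create_test_user default (test_user_1000, as asserted elsewhere in this change set):

    user_m1 = m1_users.create_test_user()
    rm.wait_for_replication(m1, m2)
    user_m2 = m2_users.get('test_user_1000')
    assert user_m1.get_attr_val_utf8('uid') == user_m2.get_attr_val_utf8('uid')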

+ 2 - 2
dirsrvtests/tests/suites/mapping_tree/be_del_and_default_naming_attr_test.py

@@ -24,7 +24,7 @@ def test_be_delete(topo):
     context should also be updated to reflect the next available suffix
 
     :id: 5208f897-7c95-4925-bad0-9ceb95fee678
-    :setup: Master Instance
+    :setup: Supplier Instance
     :steps:
         1. Create second backend/suffix
         2. Add an encrypted attribute to the default suffix
@@ -47,7 +47,7 @@ def test_be_delete(topo):
         9. Success
     """
     
-    inst = topo.ms["master1"] 
+    inst = topo.ms["supplier1"] 
     
     # Create second suffix      
     backends = Backends(inst)

+ 14 - 14
dirsrvtests/tests/suites/mapping_tree/referral_during_tot_init_test.py

@@ -20,36 +20,36 @@ pytestmark = pytest.mark.tier1
 @pytest.mark.skipif(ds_is_older("1.4.0.0"), reason="Not implemented")
 def test_referral_during_tot(topology_m2):
 
-    master1 = topology_m2.ms["master1"]
-    master2 = topology_m2.ms["master2"]
+    supplier1 = topology_m2.ms["supplier1"]
+    supplier2 = topology_m2.ms["supplier2"]
 
-    users = UserAccounts(master2, DEFAULT_SUFFIX)
+    users = UserAccounts(supplier2, DEFAULT_SUFFIX)
     u = users.create(properties=TEST_USER_PROPERTIES)
     u.set('userPassword', 'password')
     binddn = u.dn
     bindpw = 'password'
 
-    # Create a bunch of entries on master1
-    ldif_dir = master1.get_ldif_dir()
+    # Create a bunch of entries on supplier1
+    ldif_dir = supplier1.get_ldif_dir()
     import_ldif = ldif_dir + '/ref_during_tot_import.ldif'
-    dbgen_users(master1, 10000, import_ldif, DEFAULT_SUFFIX)
+    dbgen_users(supplier1, 10000, import_ldif, DEFAULT_SUFFIX)
 
-    master1.stop()
-    master1.ldif2db(bename=None, excludeSuffixes=None, encrypt=False, suffixes=[DEFAULT_SUFFIX], import_file=import_ldif)
-    master1.start()
+    supplier1.stop()
+    supplier1.ldif2db(bename=None, excludeSuffixes=None, encrypt=False, suffixes=[DEFAULT_SUFFIX], import_file=import_ldif)
+    supplier1.start()
    # Recreate the user on m1 also, so that if the init finishes first we don't lose the user on m2
-    users = UserAccounts(master1, DEFAULT_SUFFIX)
+    users = UserAccounts(supplier1, DEFAULT_SUFFIX)
     u = users.create(properties=TEST_USER_PROPERTIES)
     u.set('userPassword', 'password')
-    # Now export them to master2
-    agmts = Agreements(master1)
+    # Now export them to supplier2
+    agmts = Agreements(supplier1)
     agmts.list()[0].begin_reinit()
 
-    # While that's happening try to bind as a user to master 2
+    # While that's happening try to bind as a user to supplier 2
     # This should trigger the referral code.
     referred = False
     for i in range(0, 100):
-        conn = ldap.initialize(master2.toLDAPURL())
+        conn = ldap.initialize(supplier2.toLDAPURL())
         conn.set_option(ldap.OPT_REFERRALS, False)
         try:
             conn.simple_bind_s(binddn, bindpw)
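
With OPT_REFERRALS disabled, python-ldap raises the referral as an exception instead of chasing it, which is what the loop above counts on; a minimal sketch using the names from this test:

    import ldap

    conn = ldap.initialize(supplier2.toLDAPURL())
    conn.set_option(ldap.OPT_REFERRALS, False)
    try:
        conn.simple_bind_s(binddn, bindpw)
    except ldap.REFERRAL:
        referred = True   # supplier2 referred the bind during total init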

+ 7 - 7
dirsrvtests/tests/suites/memberof_plugin/regression_test.py

@@ -47,7 +47,7 @@ def add_users(topo_m2, users_num, suffix):
     Return the list of added user DNs.
     """
     users_list = []
-    users = UserAccounts(topo_m2.ms["master1"], suffix, rdn=None)
+    users = UserAccounts(topo_m2.ms["supplier1"], suffix, rdn=None)
     log.info('Adding %d users' % users_num)
     for num in sample(list(range(1000)), users_num):
         num_ran = int(round(num))
@@ -103,7 +103,7 @@ def test_memberof_with_repl(topo):
     """Test that we allowed to enable MemberOf plugin in dedicated consumer
 
     :id: ef71cd7c-e792-41bf-a3c0-b3b38391cbe5
-    :setup: 1 Master - 1 Hub - 1 Consumer
+    :setup: 1 Supplier - 1 Hub - 1 Consumer
     :steps:
         1. Configure replication to EXCLUDE memberof
         2. Enable memberof plugin
@@ -146,7 +146,7 @@ def test_memberof_with_repl(topo):
         19. user_0 should be memberof group_0 on M,H,C
     """
 
-    M1 = topo.ms["master1"]
+    M1 = topo.ms["supplier1"]
     H1 = topo.hs["hub1"]
     C1 = topo.cs["consumer1"]
 
@@ -297,7 +297,7 @@ def test_scheme_violation_errors_logged(topo_m2):
         6. Errors should be logged
     """
 
-    inst = topo_m2.ms["master1"]
+    inst = topo_m2.ms["supplier1"]
     memberof = MemberOfPlugin(inst)
     memberof.enable()
     memberof.set_autoaddoc('nsMemberOf')
@@ -331,7 +331,7 @@ def test_memberof_with_changelog_reset(topo_m2):
     """Test that replication does not break, after DS stop-start, due to changelog reset
 
     :id: 60c11636-55a1-4704-9e09-2c6bcc828de4
-    :setup: 2 Masters
+    :setup: 2 Suppliers
     :steps:
         1. On M1 and M2, Enable memberof
         2. On M1, add 999 entries allowing memberof
@@ -348,8 +348,8 @@ def test_memberof_with_changelog_reset(topo_m2):
         4. M1 should be stopped
         5. Replication should be working fine
     """
-    m1 = topo_m2.ms["master1"]
-    m2 = topo_m2.ms["master2"]
+    m1 = topo_m2.ms["supplier1"]
+    m2 = topo_m2.ms["supplier2"]
 
     log.info("Configure memberof on M1 and M2")
     memberof = MemberOfPlugin(m1)

+ 31 - 31
dirsrvtests/tests/suites/memory_leaks/MMR_double_free_test.py

@@ -33,12 +33,12 @@ ds_paths = Paths()
 def topology_setup(topology_m2):
     """Configure the topology with purge parameters and enable audit logging
 
-        - configure replica purge delay and interval on master1 and master2
-        - enable audit log on master1 and master2
-        - restart master1 and master2
+        - configure replica purge delay and interval on supplier1 and supplier2
+        - enable audit log on supplier1 and supplier2
+        - restart supplier1 and supplier2
     """
-    m1 = topology_m2.ms["master1"]
-    m2 = topology_m2.ms["master2"]
+    m1 = topology_m2.ms["supplier1"]
+    m2 = topology_m2.ms["supplier2"]
 
     replica1 = Replicas(m1).get(DEFAULT_SUFFIX)
     replica2 = Replicas(m2).get(DEFAULT_SUFFIX)
@@ -75,33 +75,33 @@ def test_MMR_double_free(topology_m2, topology_setup, timeout=5):
     :id: 91580b1c-ad10-49bc-8aed-402edac59f46 
     :setup: replicated topology - purge delay and purge interval are configured
     :steps:
-        1. create an entry on master1
+        1. create an entry on supplier1
         2. modify the entry with description add
-        3. check the entry is correctly replicated on master2
-        4. stop master2
-        5. delete the entry's description on master1
-        6. stop master1
-        7. start master2
-        8. delete the entry's description on master2
-        9. add an entry's description on master2
+        3. check the entry is correctly replicated on supplier2
+        4. stop supplier2
+        5. delete the entry's description on supplier1
+        6. stop supplier1
+        7. start supplier2
+        8. delete the entry's description on supplier2
+        9. add an entry's description on supplier2
         10. wait the purge delay duration
-        11. add again an entry's description on master2
+        11. add again an entry's description on supplier2
     :expectedresults:
-        1. entry exists on master1
+        1. entry exists on supplier1
         2. modification is effective 
-        3. entry exists on master2 and modification is effective
-        4. master2 is stopped
-        5. description is removed from entry on master1
-        6. master1 is stopped
-        7. master2 is started - not synchronized with master1
-        8. description is removed from entry on master2 (same op should be performed too by replication mecanism)
-        9. description to entry is added on master2
+        3. entry exists on supplier2 and modification is effective
+        4. supplier2 is stopped
+        5. description is removed from entry on supplier1
+        6. supplier1 is stopped
+        7. supplier2 is started - not synchronized with supplier1
+        8. description is removed from entry on supplier2 (the same op should also be performed by the replication mechanism)
+        9. a description is added to the entry on supplier2
        10. Purge delay has expired - changes are erased 
+        11. a description is added to the entry again on supplier2
+        11.  description to entry is added again on supplier2
     """
     name = 'test_entry'
 
-    entry_m1 = UserAccounts(topology_m2.ms["master1"], DEFAULT_SUFFIX)
+    entry_m1 = UserAccounts(topology_m2.ms["supplier1"], DEFAULT_SUFFIX)
     entry = entry_m1.create(properties={
         'uid': name,
         'sn': name,
@@ -116,7 +116,7 @@ def test_MMR_double_free(topology_m2, topology_setup, timeout=5):
     entry.add('description', '5')
 
     log.info('Check the update in the replicated entry')
-    entry_m2 = UserAccounts(topology_m2.ms["master2"], DEFAULT_SUFFIX)
+    entry_m2 = UserAccounts(topology_m2.ms["supplier2"], DEFAULT_SUFFIX)
 
     success = 0
     for i in range(0, timeout):
@@ -132,16 +132,16 @@ def test_MMR_double_free(topology_m2, topology_setup, timeout=5):
     assert success
 
     log.info('Stop M2 so that it will not receive the next update')
-    topology_m2.ms["master2"].stop(10)
+    topology_m2.ms["supplier2"].stop(10)
 
     log.info('Perform a del operation that is not replicated')
     entry.remove('description', '5')
 
-    log.info("Stop M1 so that it will keep del '5' that is unknown from master2")
-    topology_m2.ms["master1"].stop(10)
+    log.info("Stop M1 so that it will keep del '5' that is unknown from supplier2")
+    topology_m2.ms["supplier1"].stop(10)
 
     log.info('start M2 to do the next updates')
-    topology_m2.ms["master2"].start()
+    topology_m2.ms["supplier2"].start()
 
     log.info("del 'description' by '5'")
     entry_repl.remove('description', '5')
@@ -155,8 +155,8 @@ def test_MMR_double_free(topology_m2, topology_setup, timeout=5):
     log.info("add 'description' by '6' that purge the state info")
     entry_repl.add('description', '6')
      
-    log.info('Restart master1')
-    topology_m2.ms["master1"].start(30)
+    log.info('Restart supplier1')
+    topology_m2.ms["supplier1"].start(30)
 
 
 if __name__ == '__main__':
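
The purge window that steps 10 and 11 wait out is configured by the topology_setup fixture; a hedged sketch of that tuning (the five second values are illustrative):

    from lib389.replica import Replicas
    from lib389._constants import DEFAULT_SUFFIX

    replica = Replicas(m1).get(DEFAULT_SUFFIX)
    # Keep replication state info only briefly so the test can outwait it.
    replica.set('nsds5ReplicaPurgeDelay', '5')
    replica.set('nsds5ReplicaTombstonePurgeInterval', '5')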

+ 5 - 5
dirsrvtests/tests/suites/password/regression_test.py

@@ -10,7 +10,7 @@ import time
 from lib389._constants import PASSWORD, DN_DM, DEFAULT_SUFFIX
 from lib389._constants import SUFFIX, PASSWORD, DN_DM, DN_CONFIG, PLUGIN_RETRO_CHANGELOG, DEFAULT_SUFFIX, DEFAULT_CHANGELOG_DB, DEFAULT_BENAME
 from lib389 import Entry
-from lib389.topologies import topology_m1 as topo_master
+from lib389.topologies import topology_m1 as topo_supplier
 from lib389.idm.user import UserAccounts
 from lib389.utils import ldap, os, logging, ensure_bytes, ds_is_newer, ds_supports_new_changelog
 from lib389.topologies import topology_st as topo
@@ -220,13 +220,13 @@ def test_global_vs_local(topo, passw_policy, create_user, user_pasw):
     create_user.set('userPassword', PASSWORD)
 
 @pytest.mark.ds49789
-def test_unhashed_pw_switch(topo_master):
+def test_unhashed_pw_switch(topo_supplier):
     """Check that nsslapd-unhashed-pw-switch works corrently
 
     :id: e5aba180-d174-424d-92b0-14fe7bb0b92a
-    :setup: Master Instance
+    :setup: Supplier Instance
     :steps:
-        1. A Master is created, enable retrocl (not  used here)
+        1. A Supplier is created; enable retrocl (not used here)
         2. Create a set of users
         3. update userpassword of user1 and check that unhashed#user#password is not logged (default)
        4. update userpassword of user2 and check that unhashed#user#password is not logged ('nolog')
@@ -241,7 +241,7 @@ def test_unhashed_pw_switch(topo_master):
     MAX_USERS = 10
     PEOPLE_DN = ("ou=people," + DEFAULT_SUFFIX)
 
-    inst = topo_master.ms["master1"]
+    inst = topo_supplier.ms["supplier1"]
     inst.modify_s("cn=Retro Changelog Plugin,cn=plugins,cn=config",
                                         [(ldap.MOD_REPLACE, 'nsslapd-changelogmaxage', b'2m'),
                                          (ldap.MOD_REPLACE, 'nsslapd-changelog-trim-interval', b"5s"),

+ 1 - 1
dirsrvtests/tests/suites/plugins/entryusn_test.py

@@ -198,7 +198,7 @@ def test_entryusn_after_repl_delete(topology_m2):
         4. Success
     """
 
-    inst = topology_m2.ms["master1"]
+    inst = topology_m2.ms["supplier1"]
     plugin = USNPlugin(inst)
     plugin.enable()
     inst.restart()

+ 4 - 4
dirsrvtests/tests/suites/referint_plugin/rename_test.py

@@ -89,8 +89,8 @@ def test_rename_large_subtree(topology_m2):
         4. The rename operation of ou=s1 succeeds
     """
 
-    st = topology_m2.ms["master1"]
-    m2 = topology_m2.ms["master2"]
+    st = topology_m2.ms["supplier1"]
+    m2 = topology_m2.ms["supplier2"]
 
     # Create a default group
     gps = Groups(st, DEFAULT_SUFFIX)
@@ -132,7 +132,7 @@ def test_rename_large_subtree(topology_m2):
 
     # Pause replication
     repl = ReplicationManager(DEFAULT_SUFFIX)
-    repl.disable_to_master(m2, [st, ])
+    repl.disable_to_supplier(m2, [st, ])
 
     # Create the users 1 -> UCOUNT in ou=s1
     nsu = nsUserAccounts(st, basedn=ou_s1.dn, rdn=None)
@@ -141,7 +141,7 @@ def test_rename_large_subtree(topology_m2):
 
     # Enable replication
 
-    repl.enable_to_master(m2, [st, ])
+    repl.enable_to_supplier(m2, [st, ])
 
     # Assert they are in the group as we expect
     members = group.get_attr_vals_utf8('member')
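
The disable/enable pair above is the general pattern for doing bulk work without generating replication traffic; a short sketch with the same names:

    from lib389.replica import ReplicationManager
    from lib389._constants import DEFAULT_SUFFIX

    repl = ReplicationManager(DEFAULT_SUFFIX)
    repl.disable_to_supplier(m2, [st])   # stop sending changes st -> m2
    # ... perform the bulk adds and renames on st ...
    repl.enable_to_supplier(m2, [st])    # resume and let m2 catch up
    repl.wait_for_replication(st, m2)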

+ 1 - 1
dirsrvtests/tests/suites/replication/__init__.py

@@ -7,7 +7,7 @@ from lib389._constants import DEFAULT_SUFFIX
 
 
 def get_repl_entries(topo, entry_name, attr_list):
-    """Get a list of test entries from all masters"""
+    """Get a list of test entries from all suppliers"""
 
     entries_list = []
 

+ 84 - 84
dirsrvtests/tests/suites/replication/acceptance_test.py

@@ -36,11 +36,11 @@ log = logging.getLogger(__name__)
 
 @pytest.fixture(scope="function")
 def create_entry(topo_m4, request):
-    """Add test entry to master1"""
+    """Add test entry to supplier1"""
 
     log.info('Adding entry {}'.format(TEST_ENTRY_DN))
 
-    test_user = UserAccount(topo_m4.ms["master1"], TEST_ENTRY_DN)
+    test_user = UserAccount(topo_m4.ms["supplier1"], TEST_ENTRY_DN)
     if test_user.exists():
         log.info('Deleting entry {}'.format(TEST_ENTRY_DN))
         test_user.delete()
@@ -59,12 +59,12 @@ def new_suffix(topo_m4, request):
     """Add a new suffix and enable a replication on it"""
 
     for num in range(1, 5):
-        log.info('Adding suffix:{} and backend: {} to master{}'.format(NEW_SUFFIX, NEW_BACKEND, num))
-        topo_m4.ms["master{}".format(num)].backend.create(NEW_SUFFIX, {BACKEND_NAME: NEW_BACKEND})
-        topo_m4.ms["master{}".format(num)].mappingtree.create(NEW_SUFFIX, NEW_BACKEND)
+        log.info('Adding suffix:{} and backend: {} to supplier{}'.format(NEW_SUFFIX, NEW_BACKEND, num))
+        topo_m4.ms["supplier{}".format(num)].backend.create(NEW_SUFFIX, {BACKEND_NAME: NEW_BACKEND})
+        topo_m4.ms["supplier{}".format(num)].mappingtree.create(NEW_SUFFIX, NEW_BACKEND)
 
         try:
-            topo_m4.ms["master{}".format(num)].add_s(Entry((NEW_SUFFIX, {
+            topo_m4.ms["supplier{}".format(num)].add_s(Entry((NEW_SUFFIX, {
                 'objectclass': 'top',
                 'objectclass': 'organization',
                 'o': NEW_SUFFIX_NAME,
@@ -76,9 +76,9 @@ def new_suffix(topo_m4, request):
 
     def fin():
         for num in range(1, 5):
-            log.info('Deleting suffix:{} and backend: {} from master{}'.format(NEW_SUFFIX, NEW_BACKEND, num))
-            topo_m4.ms["master{}".format(num)].mappingtree.delete(NEW_SUFFIX)
-            topo_m4.ms["master{}".format(num)].backend.delete(NEW_SUFFIX)
+            log.info('Deleting suffix:{} and backend: {} from supplier{}'.format(NEW_SUFFIX, NEW_BACKEND, num))
+            topo_m4.ms["supplier{}".format(num)].mappingtree.delete(NEW_SUFFIX)
+            topo_m4.ms["supplier{}".format(num)].backend.delete(NEW_SUFFIX)
 
     request.addfinalizer(fin)
 
@@ -87,11 +87,11 @@ def test_add_entry(topo_m4, create_entry):
     """Check that entries are replicated after add operation
 
     :id: 024250f1-5f7e-4f3b-a9f5-27741e6fd405
-    :setup: Four masters replication setup, an entry
+    :setup: Four suppliers replication setup, an entry
     :steps:
-        1. Check entry on all other masters
+        1. Check entry on all other suppliers
     :expectedresults:
-        1. The entry should be replicated to all masters
+        1. The entry should be replicated to all suppliers
     """
 
     entries = get_repl_entries(topo_m4, TEST_ENTRY_NAME, ["uid"])
@@ -102,32 +102,32 @@ def test_modify_entry(topo_m4, create_entry):
     """Check that entries are replicated after modify operation
 
     :id: 36764053-622c-43c2-a132-d7a3ab7d9aaa
-    :setup: Four masters replication setup, an entry
+    :setup: Four suppliers replication setup, an entry
     :steps:
-        1. Modify the entry on master1 - add attribute
+        1. Modify the entry on supplier1 - add attribute
         2. Wait for replication to happen
-        3. Check entry on all other masters
-        4. Modify the entry on master1 - replace attribute
+        3. Check entry on all other suppliers
+        4. Modify the entry on supplier1 - replace attribute
         5. Wait for replication to happen
-        6. Check entry on all other masters
-        7. Modify the entry on master1 - delete attribute
+        6. Check entry on all other suppliers
+        7. Modify the entry on supplier1 - delete attribute
         8. Wait for replication to happen
-        9. Check entry on all other masters
+        9. Check entry on all other suppliers
     :expectedresults:
         1. Attribute should be successfully added
         2. Some time should pass
-        3. The change should be present on all masters
+        3. The change should be present on all suppliers
         4. Attribute should be successfully replaced
         5. Some time should pass
-        6. The change should be present on all masters
+        6. The change should be present on all suppliers
         7. Attribute should be successfully deleted
         8. Some time should pass
-        9. The change should be present on all masters
+        9. The change should be present on all suppliers
     """
 
     log.info('Modifying entry {} - add operation'.format(TEST_ENTRY_DN))
 
-    test_user = UserAccount(topo_m4.ms["master1"], TEST_ENTRY_DN)
+    test_user = UserAccount(topo_m4.ms["supplier1"], TEST_ENTRY_DN)
     test_user.add('mail', '{}@redhat.com'.format(TEST_ENTRY_NAME))
     time.sleep(1)
 
@@ -156,17 +156,17 @@ def test_delete_entry(topo_m4, create_entry):
     """Check that entry deletion is replicated after delete operation
 
     :id: 18437262-9d6a-4b98-a47a-6182501ab9bc
-    :setup: Four masters replication setup, an entry
+    :setup: Four suppliers replication setup, an entry
     :steps:
-        1. Delete the entry from master1
-        2. Check entry on all other masters
+        1. Delete the entry from supplier1
+        2. Check entry on all other suppliers
     :expectedresults:
         1. The entry should be deleted
-        2. The change should be present on all masters
+        2. The change should be present on all suppliers
     """
 
     log.info('Deleting entry {} during the test'.format(TEST_ENTRY_DN))
-    topo_m4.ms["master1"].delete_s(TEST_ENTRY_DN)
+    topo_m4.ms["supplier1"].delete_s(TEST_ENTRY_DN)
 
     entries = get_repl_entries(topo_m4, TEST_ENTRY_NAME, ["uid"])
     assert not entries, "Entry deletion {} wasn't replicated successfully".format(TEST_ENTRY_DN)
@@ -178,20 +178,20 @@ def test_modrdn_entry(topo_m4, create_entry, delold):
 
     :id: 02558e6d-a745-45ae-8d88-34fe9b16adc9
     :parametrized: yes
-    :setup: Four masters replication setup, an entry
+    :setup: Four suppliers replication setup, an entry
     :steps:
-        1. Make modrdn operation on entry on master1 with both delold 1 and 0
-        2. Check entry on all other masters
+        1. Make modrdn operation on entry on supplier1 with both delold 1 and 0
+        2. Check entry on all other suppliers
     :expectedresults:
         1. Modrdn operation should be successful
-        2. The change should be present on all masters
+        2. The change should be present on all suppliers
     """
 
     newrdn_name = 'newrdn'
     newrdn_dn = 'uid={},{}'.format(newrdn_name, DEFAULT_SUFFIX)
     log.info('Modify entry RDN {}'.format(TEST_ENTRY_DN))
     try:
-        topo_m4.ms["master1"].modrdn_s(TEST_ENTRY_DN, 'uid={}'.format(newrdn_name), delold)
+        topo_m4.ms["supplier1"].modrdn_s(TEST_ENTRY_DN, 'uid={}'.format(newrdn_name), delold)
     except ldap.LDAPError as e:
         log.error('Failed to modrdn entry (%s): error (%s)' % (TEST_ENTRY_DN,
                                                                e.message['desc']))
@@ -209,26 +209,26 @@ def test_modrdn_entry(topo_m4, create_entry, delold):
                 TEST_ENTRY_DN)
     finally:
         log.info('Remove entry with new RDN {}'.format(newrdn_dn))
-        topo_m4.ms["master1"].delete_s(newrdn_dn)
+        topo_m4.ms["supplier1"].delete_s(newrdn_dn)
 
 
 def test_modrdn_after_pause(topo_m4):
     """Check that changes are properly replicated after replica pause
 
     :id: 6271dc9c-a993-4a9e-9c6d-05650cdab282
-    :setup: Four masters replication setup, an entry
+    :setup: Four suppliers replication setup, an entry
     :steps:
         1. Pause all replicas
-        2. Make modrdn operation on entry on master1
+        2. Make modrdn operation on entry on supplier1
         3. Resume all replicas
         4. Wait for replication to happen
-        5. Check entry on all other masters
+        5. Check entry on all other suppliers
     :expectedresults:
         1. Replicas should be paused
         2. Modrdn operation should be successful
         3. Replicas should be resumed
         4. Some time should pass
-        5. The change should be present on all masters
+        5. The change should be present on all suppliers
     """
 
     newrdn_name = 'newrdn'
@@ -236,7 +236,7 @@ def test_modrdn_after_pause(topo_m4):
 
     log.info('Adding entry {}'.format(TEST_ENTRY_DN))
     try:
-        topo_m4.ms["master1"].add_s(Entry((TEST_ENTRY_DN, {
+        topo_m4.ms["supplier1"].add_s(Entry((TEST_ENTRY_DN, {
             'objectclass': 'top person'.split(),
             'objectclass': 'organizationalPerson',
             'objectclass': 'inetorgperson',
@@ -254,7 +254,7 @@ def test_modrdn_after_pause(topo_m4):
 
     log.info('Modify entry RDN {}'.format(TEST_ENTRY_DN))
     try:
-        topo_m4.ms["master1"].modrdn_s(TEST_ENTRY_DN, 'uid={}'.format(newrdn_name))
+        topo_m4.ms["supplier1"].modrdn_s(TEST_ENTRY_DN, 'uid={}'.format(newrdn_name))
     except ldap.LDAPError as e:
         log.error('Failed to modrdn entry (%s): error (%s)' % (TEST_ENTRY_DN,
                                                                e.message['desc']))
@@ -271,7 +271,7 @@ def test_modrdn_after_pause(topo_m4):
         assert all(entries_new), "Entry {} wasn't replicated successfully".format(newrdn_name)
     finally:
         log.info('Remove entry with new RDN {}'.format(newrdn_dn))
-        topo_m4.ms["master1"].delete_s(newrdn_dn)
+        topo_m4.ms["supplier1"].delete_s(newrdn_dn)
 
 
 @pytest.mark.bz842441
@@ -279,7 +279,7 @@ def test_modify_stripattrs(topo_m4):
     """Check that we can modify nsds5replicastripattrs
 
     :id: f36abed8-e262-4f35-98aa-71ae55611aaa
-    :setup: Four masters replication setup
+    :setup: Four suppliers replication setup
     :steps:
         1. Modify nsds5replicastripattrs attribute on any agreement
         2. Search for the modified attribute
@@ -288,7 +288,7 @@ def test_modify_stripattrs(topo_m4):
         2. The modified attribute should be the one we set
     """
 
-    m1 = topo_m4.ms["master1"]
+    m1 = topo_m4.ms["supplier1"]
     agreement = m1.agreement.list(suffix=DEFAULT_SUFFIX)[0].dn
     attr_value = b'modifiersname modifytimestamp'
 
@@ -304,7 +304,7 @@ def test_new_suffix(topo_m4, new_suffix):
     """Check that we can enable replication on a new suffix
 
     :id: d44a9ed4-26b0-4189-b0d0-b2b336ddccbd
-    :setup: Four masters replication setup, a new suffix
+    :setup: Four suppliers replication setup, a new suffix
     :steps:
         1. Enable replication on the new suffix
         2. Check if replication works
@@ -314,26 +314,26 @@ def test_new_suffix(topo_m4, new_suffix):
         2. Replication should work
         3. Replication on the new suffix should be disabled
     """
-    m1 = topo_m4.ms["master1"]
-    m2 = topo_m4.ms["master2"]
+    m1 = topo_m4.ms["supplier1"]
+    m2 = topo_m4.ms["supplier2"]
 
     repl = ReplicationManager(NEW_SUFFIX)
 
-    repl.create_first_master(m1)
+    repl.create_first_supplier(m1)
 
-    repl.join_master(m1, m2)
+    repl.join_supplier(m1, m2)
 
     repl.test_replication(m1, m2)
     repl.test_replication(m2, m1)
 
-    repl.remove_master(m1)
-    repl.remove_master(m2)
+    repl.remove_supplier(m1)
+    repl.remove_supplier(m2)
 
 def test_many_attrs(topo_m4, create_entry):
     """Check a replication with many attributes (add and delete)
 
     :id: d540b358-f67a-43c6-8df5-7c74b3cb7523
-    :setup: Four masters replication setup, a test entry
+    :setup: Four suppliers replication setup, a test entry
     :steps:
         1. Add 10 new attributes to the entry
         2. Delete few attributes: one from the beginning,
@@ -345,10 +345,10 @@ def test_many_attrs(topo_m4, create_entry):
         3. The changes should be replicated in the right order
     """
 
-    m1 = topo_m4.ms["master1"]
+    m1 = topo_m4.ms["supplier1"]
     add_list = ensure_list_bytes(map(lambda x: "test{}".format(x), range(10)))
     delete_list = ensure_list_bytes(map(lambda x: "test{}".format(x), [0, 4, 7, 9]))
-    test_user = UserAccount(topo_m4.ms["master1"], TEST_ENTRY_DN)
+    test_user = UserAccount(topo_m4.ms["supplier1"], TEST_ENTRY_DN)
 
     log.info('Modifying entry {} - 10 add operations'.format(TEST_ENTRY_DN))
     for add_name in add_list:
@@ -375,22 +375,22 @@ def test_double_delete(topo_m4, create_entry):
     """Check that double delete of the entry doesn't crash server
 
     :id: 5b85a5af-df29-42c7-b6cb-965ec5aa478e
-    :feature: Multi master replication
-    :setup: Four masters replication setup, a test entry
+    :feature: Multi supplier replication
+    :setup: Four suppliers replication setup, a test entry
     :steps: 1. Delete the entry
-            2. Delete the entry on the second master
+            2. Delete the entry on the second supplier
             3. Check that server is alive
    :expectedresults: Server hasn't crashed
     """
 
-    log.info('Deleting entry {} from master1'.format(TEST_ENTRY_DN))
-    topo_m4.ms["master1"].delete_s(TEST_ENTRY_DN)
+    log.info('Deleting entry {} from supplier1'.format(TEST_ENTRY_DN))
+    topo_m4.ms["supplier1"].delete_s(TEST_ENTRY_DN)
 
-    log.info('Deleting entry {} from master2'.format(TEST_ENTRY_DN))
+    log.info('Deleting entry {} from supplier2'.format(TEST_ENTRY_DN))
     try:
-        topo_m4.ms["master2"].delete_s(TEST_ENTRY_DN)
+        topo_m4.ms["supplier2"].delete_s(TEST_ENTRY_DN)
     except ldap.NO_SUCH_OBJECT:
-        log.info("Entry {} wasn't found master2. It is expected.".format(TEST_ENTRY_DN))
+        log.info("Entry {} wasn't found supplier2. It is expected.".format(TEST_ENTRY_DN))
 
     log.info('Make searches to check if server is alive')
     entries = get_repl_entries(topo_m4, TEST_ENTRY_NAME, ["uid"])
@@ -401,16 +401,16 @@ def test_password_repl_error(topo_m4, create_entry):
     """Check that error about userpassword replication is properly logged
 
     :id: d4f12dc0-cd2c-4b92-9b8d-d764a60f0698
-    :feature: Multi master replication
-    :setup: Four masters replication setup, a test entry
-    :steps: 1. Change userpassword on master 1
+    :feature: Multi supplier replication
+    :setup: Four suppliers replication setup, a test entry
+    :steps: 1. Change userpassword on supplier 1
             2. Restart the servers to flush the logs
             3. Check the error log for an replication error
     :expectedresults: We don't have a replication error in the error log
     """
 
-    m1 = topo_m4.ms["master1"]
-    m2 = topo_m4.ms["master2"]
+    m1 = topo_m4.ms["supplier1"]
+    m2 = topo_m4.ms["supplier2"]
     TEST_ENTRY_NEW_PASS = 'new_{}'.format(TEST_ENTRY_NAME)
 
     log.info('Clean the error log')
@@ -419,17 +419,17 @@ def test_password_repl_error(topo_m4, create_entry):
     log.info('Set replication loglevel')
     m2.config.loglevel((ErrorLog.REPLICA,))
 
-    log.info('Modifying entry {} - change userpassword on master 2'.format(TEST_ENTRY_DN))
-    test_user_m1 = UserAccount(topo_m4.ms["master1"], TEST_ENTRY_DN)
-    test_user_m2 = UserAccount(topo_m4.ms["master2"], TEST_ENTRY_DN)
-    test_user_m3 = UserAccount(topo_m4.ms["master3"], TEST_ENTRY_DN)
-    test_user_m4 = UserAccount(topo_m4.ms["master4"], TEST_ENTRY_DN)
+    log.info('Modifying entry {} - change userpassword on supplier 2'.format(TEST_ENTRY_DN))
+    test_user_m1 = UserAccount(topo_m4.ms["supplier1"], TEST_ENTRY_DN)
+    test_user_m2 = UserAccount(topo_m4.ms["supplier2"], TEST_ENTRY_DN)
+    test_user_m3 = UserAccount(topo_m4.ms["supplier3"], TEST_ENTRY_DN)
+    test_user_m4 = UserAccount(topo_m4.ms["supplier4"], TEST_ENTRY_DN)
 
     test_user_m1.set('userpassword', TEST_ENTRY_NEW_PASS)
 
     log.info('Restart the servers to flush the logs')
     for num in range(1, 5):
-        topo_m4.ms["master{}".format(num)].restart(timeout=10)
+        topo_m4.ms["supplier{}".format(num)].restart(timeout=10)
 
     m1_conn = test_user_m1.bind(TEST_ENTRY_NEW_PASS)
     m2_conn = test_user_m2.bind(TEST_ENTRY_NEW_PASS)
@@ -444,7 +444,7 @@ def test_invalid_agmt(topo_m4):
     """Test adding that an invalid agreement is properly rejected and does not crash the server
 
     :id: 92f10f46-1be1-49ca-9358-784359397bc2
-    :setup: MMR with four masters
+    :setup: MMR with four suppliers
     :steps:
         1. Add invalid agreement (nsds5ReplicaEnabled set to invalid value)
         2. Verify the server is still running
@@ -452,7 +452,7 @@ def test_invalid_agmt(topo_m4):
         1. Invalid repl agreement should be rejected
         2. Server should be still running
     """
-    m1 = topo_m4.ms["master1"]
+    m1 = topo_m4.ms["supplier1"]
 
     # Add invalid agreement (nsds5ReplicaEnabled set to invalid value)
     AGMT_DN = 'cn=whatever,cn=replica,cn="dc=example,dc=com",cn=mapping tree,cn=config'
@@ -481,7 +481,7 @@ def test_warining_for_invalid_replica(topo_m4):
     """Testing logs to indicate the inconsistency when configuration is performed.
 
     :id: dd689d03-69b8-4bf9-a06e-2acd19d5e2c8
-    :setup: MMR with four masters
+    :setup: MMR with four suppliers
     :steps:
        1. Set nsds5ReplicaBackoffMin to 20
        2. Set nsds5ReplicaBackoffMax to 10
@@ -489,7 +489,7 @@ def test_warining_for_invalid_replica(topo_m4):
        1. nsds5ReplicaBackoffMin should be set to 20
         2. An error should be generated and also logged in the error logs.
     """
-    replicas = Replicas(topo_m4.ms["master1"])
+    replicas = Replicas(topo_m4.ms["supplier1"])
     replica = replicas.list()[0]
     log.info('Set nsds5ReplicaBackoffMin to 20')
     replica.set('nsds5ReplicaBackoffMin', '20')
@@ -499,14 +499,14 @@ def test_warining_for_invalid_replica(topo_m4):
     log.info('Resetting configuration: nsds5ReplicaBackoffMin')
     replica.remove_all('nsds5ReplicaBackoffMin')
     log.info('Check the error log for the error')
-    assert topo_m4.ms["master1"].ds_error_log.match('.*nsds5ReplicaBackoffMax.*10.*invalid.*')
+    assert topo_m4.ms["supplier1"].ds_error_log.match('.*nsds5ReplicaBackoffMax.*10.*invalid.*')
 
 @pytest.mark.ds51082
 def test_csnpurge_large_valueset(topo_m2):
     """Test csn generator test
 
     :id: 63e2bdb2-0a8f-4660-9465-7b80a9f72a74
-    :setup: MMR with 2 masters
+    :setup: MMR with 2 suppliers
     :steps:
         1. Create a test_user
         2. add a large set of values (more than 10)
@@ -522,7 +522,7 @@ def test_csnpurge_large_valueset(topo_m2):
        5. Should succeed
         6. Should not crash
     """
-    m1 = topo_m2.ms["master2"]
+    m1 = topo_m2.ms["supplier2"]
 
     test_user = UserAccount(m1, TEST_ENTRY_DN)
     if test_user.exists():
@@ -564,7 +564,7 @@ def test_urp_trigger_substring_search(topo_m2):
 
     :id: 9869bb39-419f-42c3-a44b-c93eb0b77667
     :customerscenario: True
-    :setup: MMR with 2 masters
+    :setup: MMR with 2 suppliers
     :steps:
        1. Enable internal operation logging for plugins
         2. Create on M1 a test_user with a '*' in its DN
@@ -576,8 +576,8 @@ def test_urp_trigger_substring_search(topo_m2):
        3. Should succeed
        4. Should succeed
     """
-    m1 = topo_m2.ms["master1"]
-    m2 = topo_m2.ms["master2"]
+    m1 = topo_m2.ms["supplier1"]
+    m2 = topo_m2.ms["supplier2"]
 
    # Enable internal operation logging to capture URP internal ops
     log.info('Set nsslapd-plugin-logging to on')
@@ -626,7 +626,7 @@ def test_csngen_task(topo_m2):
     """Test csn generator test
 
     :id: b976849f-dbed-447e-91a7-c877d5d71fd0
-    :setup: MMR with 2 masters
+    :setup: MMR with 2 suppliers
     :steps:
         1. Create a csngen_test task
         2. Check that debug messages "_csngen_gen_tester_main" are in errors logs
@@ -634,7 +634,7 @@ def test_csngen_task(topo_m2):
        1. Should succeed
        2. Should succeed
     """
-    m1 = topo_m2.ms["master1"]
+    m1 = topo_m2.ms["supplier1"]
     csngen_task = csngenTestTask(m1)
     csngen_task.create(properties={
         'ttl': '300'

+ 9 - 9
dirsrvtests/tests/suites/replication/cascading_test.py

@@ -53,7 +53,7 @@ def test_basic_with_hub(topo):
     policy state attributes.
 
     :id: 4ac85552-45bc-477b-89a4-226dfff8c6cc
-    :setup: 1 master, 1 hub, 1 consumer
+    :setup: 1 supplier, 1 hub, 1 consumer
     :steps:
         1. Enable memberOf plugin and set password account lockout settings
         2. Restart the instance
@@ -79,7 +79,7 @@ def test_basic_with_hub(topo):
     """
 
     repl_manager = ReplicationManager(DEFAULT_SUFFIX)
-    master = topo.ms["master1"]
+    supplier = topo.ms["supplier1"]
     consumer = topo.cs["consumer1"]
     hub = topo.hs["hub1"]
 
@@ -91,7 +91,7 @@ def test_basic_with_hub(topo):
         inst.config.set('passwordIsGlobalPolicy', 'on')
 
     # Create user
-    user1 = UserAccount(master, BIND_DN)
+    user1 = UserAccount(supplier, BIND_DN)
     user_props = TEST_USER_PROPERTIES.copy()
     user_props.update({'sn': BIND_RDN,
                        'cn': BIND_RDN,
@@ -102,27 +102,27 @@ def test_basic_with_hub(topo):
     user1.create(properties=user_props, basedn=SUFFIX)
 
     # Create group
-    groups = Groups(master, DEFAULT_SUFFIX)
+    groups = Groups(supplier, DEFAULT_SUFFIX)
     group = groups.create(properties={'cn': 'group'})
 
     # Test replication
-    repl_manager.test_replication(master, consumer)
+    repl_manager.test_replication(supplier, consumer)
 
     # Trigger memberOf plugin by adding user to group
     group.replace('member', user1.dn)
 
     # Test replication once more
-    repl_manager.test_replication(master, consumer)
+    repl_manager.test_replication(supplier, consumer)
 
     # Issue bad password to update passwordRetryCount
     try:
-        master.simple_bind_s(user1.dn, "badpassword")
+        supplier.simple_bind_s(user1.dn, "badpassword")
     except:
         pass
 
     # Test replication one last time
-    master.simple_bind_s(DN_DM, PASSWORD)
-    repl_manager.test_replication(master, consumer)
+    supplier.simple_bind_s(DN_DM, PASSWORD)
+    repl_manager.test_replication(supplier, consumer)
 
     # Finally check if passwordRetyCount was replicated to the hub and consumer
     user1 = UserAccount(hub, BIND_DN)
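
The closing check reads the lockout counter on each downstream replica; a minimal sketch with the names above (the expected value '1' matches the single bad bind):

    for inst in (hub, consumer):
        u = UserAccount(inst, BIND_DN)
        assert u.get_attr_val_utf8('passwordRetryCount') == '1'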

+ 2 - 2
dirsrvtests/tests/suites/replication/changelog_encryption_test.py

@@ -23,7 +23,7 @@ def test_cl_encryption_setup_process(topo):
     encryption
 
     :id: 1a1b7d29-69f5-4f0e-91c4-e7f66140ff17
-    :setup: Master Instance, Consumer Instance
+    :setup: Supplier Instance, Consumer Instance
     :steps:
         1. Enable TLS for the server
         2. Export changelog
@@ -38,7 +38,7 @@ def test_cl_encryption_setup_process(topo):
         5. Success
     """
 
-    supplier = topo.ms['master1']
+    supplier = topo.ms['supplier1']
     consumer = topo.cs['consumer1']
 
     # Enable TLS

+ 66 - 66
dirsrvtests/tests/suites/replication/changelog_test.py

@@ -50,7 +50,7 @@ def _perform_ldap_operations(topo):
     """Add a test user, modify description, modrdn user and delete it"""
 
     log.info('Adding user {}'.format(TEST_ENTRY_NAME))
-    users = UserAccounts(topo.ms['master1'], DEFAULT_SUFFIX)
+    users = UserAccounts(topo.ms['supplier1'], DEFAULT_SUFFIX)
     user_properties = {
         'uid': TEST_ENTRY_NAME,
         'cn': TEST_ENTRY_NAME,
@@ -64,7 +64,7 @@ def _perform_ldap_operations(topo):
     tuser.replace('description', 'newdesc')
     log.info('Modify RDN of user {}'.format(tuser.dn))
     try:
-        topo.ms['master1'].modrdn_s(tuser.dn, 'uid={}'.format(NEW_RDN_NAME), 0)
+        topo.ms['supplier1'].modrdn_s(tuser.dn, 'uid={}'.format(NEW_RDN_NAME), 0)
     except ldap.LDAPError as e:
         log.fatal('Failed to modrdn entry {}'.format(tuser.dn))
         raise e
@@ -78,12 +78,12 @@ def _create_changelog_dump(topo):
 
     log.info('Dump changelog using nss5task and check if ldap operations are logged')
     if ds_supports_new_changelog():
-        changelog_dir = topo.ms['master1'].get_ldif_dir()
+        changelog_dir = topo.ms['supplier1'].get_ldif_dir()
         changelog_end = '_cl.ldif'
     else:
-        changelog_dir = topo.ms['master1'].get_changelog_dir()
+        changelog_dir = topo.ms['supplier1'].get_changelog_dir()
         changelog_end = '.ldif'
-    replicas = Replicas(topo.ms["master1"])
+    replicas = Replicas(topo.ms["supplier1"])
     replica = replicas.get(DEFAULT_SUFFIX)
     log.info('Remove ldif files, if present in: {}'.format(changelog_dir))
     for files in os.listdir(changelog_dir):
@@ -144,26 +144,26 @@ def changelog_init(topo):
     log.info('Testing Ticket 47669 - Test duration syntax in the changelogs')
 
     # bind as directory manager
-    topo.ms["master1"].log.info("Bind as %s" % DN_DM)
-    topo.ms["master1"].simple_bind_s(DN_DM, PASSWORD)
+    topo.ms["supplier1"].log.info("Bind as %s" % DN_DM)
+    topo.ms["supplier1"].simple_bind_s(DN_DM, PASSWORD)
 
     if not ds_supports_new_changelog():
         try:
-            changelogdir = os.path.join(os.path.dirname(topo.ms["master1"].dbdir), 'changelog')
-            topo.ms["master1"].modify_s(CHANGELOG, [(ldap.MOD_REPLACE, 'nsslapd-changelogdir',
+            changelogdir = os.path.join(os.path.dirname(topo.ms["supplier1"].dbdir), 'changelog')
+            topo.ms["supplier1"].modify_s(CHANGELOG, [(ldap.MOD_REPLACE, 'nsslapd-changelogdir',
                                                                        ensure_bytes(changelogdir))])
         except ldap.LDAPError as e:
             log.error('Failed to modify ' + CHANGELOG + ': error {}'.format(get_ldap_error_msg(e,'desc')))
             assert False
 
     try:
-        topo.ms["master1"].modify_s(RETROCHANGELOG, [(ldap.MOD_REPLACE, 'nsslapd-pluginEnabled', b'on')])
+        topo.ms["supplier1"].modify_s(RETROCHANGELOG, [(ldap.MOD_REPLACE, 'nsslapd-pluginEnabled', b'on')])
     except ldap.LDAPError as e:
         log.error('Failed to enable ' + RETROCHANGELOG + ': error {}'.format(get_ldap_error_msg(e, 'desc')))
         assert False
 
     # restart the server
-    topo.ms["master1"].restart(timeout=10)
+    topo.ms["supplier1"].restart(timeout=10)
 
 
 def add_and_check(topo, plugin, attr, val, isvalid):
@@ -173,7 +173,7 @@ def add_and_check(topo, plugin, attr, val, isvalid):
     if isvalid:
         log.info('Test %s: %s -- valid' % (attr, val))
         try:
-            topo.ms["master1"].modify_s(plugin, [(ldap.MOD_REPLACE, attr, ensure_bytes(val))])
+            topo.ms["supplier1"].modify_s(plugin, [(ldap.MOD_REPLACE, attr, ensure_bytes(val))])
         except ldap.LDAPError as e:
             log.error('Failed to add ' + attr + ': ' + val + ' to ' + plugin + ': error {}'.format(get_ldap_error_msg(e,'desc')))
             assert False
@@ -181,18 +181,18 @@ def add_and_check(topo, plugin, attr, val, isvalid):
         log.info('Test %s: %s -- invalid' % (attr, val))
         if plugin == CHANGELOG:
             try:
-                topo.ms["master1"].modify_s(plugin, [(ldap.MOD_REPLACE, attr, ensure_bytes(val))])
+                topo.ms["supplier1"].modify_s(plugin, [(ldap.MOD_REPLACE, attr, ensure_bytes(val))])
             except ldap.LDAPError as e:
                 log.error('Expectedly failed to add ' + attr + ': ' + val +
                           ' to ' + plugin + ': error {}'.format(get_ldap_error_msg(e,'desc')))
         else:
             try:
-                topo.ms["master1"].modify_s(plugin, [(ldap.MOD_REPLACE, attr, ensure_bytes(val))])
+                topo.ms["supplier1"].modify_s(plugin, [(ldap.MOD_REPLACE, attr, ensure_bytes(val))])
             except ldap.LDAPError as e:
                 log.error('Failed to add ' + attr + ': ' + val + ' to ' + plugin + ': error {}'.format(get_ldap_error_msg(e,'desc')))
 
     try:
-        entries = topo.ms["master1"].search_s(plugin, ldap.SCOPE_BASE, FILTER, [attr])
+        entries = topo.ms["supplier1"].search_s(plugin, ldap.SCOPE_BASE, FILTER, [attr])
         if isvalid:
             if not entries[0].hasValue(attr, val):
                 log.fatal('%s does not have expected (%s: %s)' % (plugin, attr, val))
@@ -215,9 +215,9 @@ def remove_ldif_files_from_changelogdir(topo, extension):
     Remove existing ldif files from changelog dir
     """
     if ds_supports_new_changelog():
-        changelog_dir = topo.ms['master1'].get_ldif_dir()
+        changelog_dir = topo.ms['supplier1'].get_ldif_dir()
     else:
-        changelog_dir = topo.ms['master1'].get_changelog_dir()
+        changelog_dir = topo.ms['supplier1'].get_changelog_dir()
 
     log.info('Remove %s files, if present in: %s' % (extension, changelog_dir))
     for files in os.listdir(changelog_dir):
@@ -241,7 +241,7 @@ def test_cldump_files_removed(topo):
     """Verify bz1685059 : cl-dump generated ldif files are removed at the end, -l option is the way to keep them
 
     :id: fbb2f2a3-167b-4bc6-b513-9e0318b09edc
-    :setup: Replication with two master, nsslapd-changelogdir is '/var/lib/dirsrv/slapd-master1/changelog'
+    :setup: Replication with two suppliers, nsslapd-changelogdir is '/var/lib/dirsrv/slapd-supplier1/changelog'
             retrochangelog plugin disabled
     :steps:
         1. Clean the changelog directory, removing .ldif files present, if any
@@ -268,7 +268,7 @@ def test_cldump_files_removed(topo):
         10. .ldif.done generated files are present in the changelog dir
      """
 
-    changelog_dir = topo.ms['master1'].get_changelog_dir()
+    changelog_dir = topo.ms['supplier1'].get_changelog_dir()
 
     # Remove existing .ldif files in changelog dir
     remove_ldif_files_from_changelogdir(topo, '.ldif')
@@ -284,7 +284,7 @@ def test_cldump_files_removed(topo):
     # This piece of code will serve as reproducer and verification mean for bz1769296
 
     log.info("Use cl-dump perl script without -l option : no generated ldif files should remain in %s " % changelog_dir)
-    cmdline=['/usr/bin/cl-dump', '-h', HOST_MASTER_1, '-p', 'invalid port', '-D', DN_DM, '-w', PASSWORD]
+    cmdline=['/usr/bin/cl-dump', '-h', HOST_SUPPLIER_1, '-p', 'invalid port', '-D', DN_DM, '-w', PASSWORD]
     log.info('Command used : %s' % cmdline)
     proc = subprocess.Popen(cmdline, stdout=subprocess.PIPE)
     msg = proc.communicate()
@@ -294,7 +294,7 @@ def test_cldump_files_removed(topo):
     # Now the core goal of the test case
     # Using cl-dump without -l option
     log.info("Use cl-dump perl script without -l option : no generated ldif files should remain in %s " % changelog_dir)
-    cmdline=['/usr/bin/cl-dump', '-h', HOST_MASTER_1, '-p', str(PORT_MASTER_1), '-D', DN_DM, '-w', PASSWORD]
+    cmdline=['/usr/bin/cl-dump', '-h', HOST_SUPPLIER_1, '-p', str(PORT_SUPPLIER_1), '-D', DN_DM, '-w', PASSWORD]
     log.info('Command used : %s' % cmdline)
     proc = subprocess.Popen(cmdline, stdout=subprocess.PIPE)
     proc.communicate()
@@ -314,7 +314,7 @@ def test_cldump_files_removed(topo):
 
     # Using cl-dump with -l option
     log.info("Use cl-dump perl script with -l option : generated ldif files should be kept in %s " % changelog_dir)
-    cmdline=['/usr/bin/cl-dump', '-h', HOST_MASTER_1, '-p', str(PORT_MASTER_1), '-D', DN_DM, '-w', PASSWORD, '-l']
+    cmdline=['/usr/bin/cl-dump', '-h', HOST_SUPPLIER_1, '-p', str(PORT_SUPPLIER_1), '-D', DN_DM, '-w', PASSWORD, '-l']
     log.info('Command used : %s' % cmdline)
     proc = subprocess.Popen(cmdline, stdout=subprocess.PIPE)
     msg = proc.communicate()
@@ -338,7 +338,7 @@ def test_dsconf_dump_changelog_files_removed(topo):
     """Verify that the python counterpart of cl-dump (using dsconf) has a correct management of generated files
 
     :id: e41dcf90-098a-4386-acb5-789384579bf7
-    :setup: Replication with two master, nsslapd-changelogdir is '/var/lib/dirsrv/slapd-master1/changelog'
+    :setup: Replication with two suppliers, nsslapd-changelogdir is '/var/lib/dirsrv/slapd-supplier1/changelog'
             retrochangelog plugin disabled
     :steps:
         1. Clean the changelog directory, removing .ldif files present, if any
@@ -366,11 +366,11 @@ def test_dsconf_dump_changelog_files_removed(topo):
      """
 
     if ds_supports_new_changelog():
-        changelog_dir = topo.ms['master1'].get_ldif_dir()
+        changelog_dir = topo.ms['supplier1'].get_ldif_dir()
     else:
-        changelog_dir = topo.ms['master1'].get_changelog_dir()
-    instance = topo.ms['master1']
-    instance_url = 'ldap://%s:%s' % (HOST_MASTER_1, PORT_MASTER_1)
+        changelog_dir = topo.ms['supplier1'].get_changelog_dir()
+    instance = topo.ms['supplier1']
+    instance_url = 'ldap://%s:%s' % (HOST_SUPPLIER_1, PORT_SUPPLIER_1)
 
     # Remove existing .ldif files in changelog dir
     remove_ldif_files_from_changelogdir(topo, '.ldif')
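
The post-condition both of these tests check is simply which dump files survive in the changelog directory; a sketch of that check, reusing the changelog_dir variable from the test above (leftover_dump_files is an illustrative helper name):

    import os

    def leftover_dump_files(changelog_dir, extension='.ldif'):
        """List generated changelog dump files still present."""
        return [f for f in os.listdir(changelog_dir) if f.endswith(extension)]

    assert leftover_dump_files(changelog_dir) == []           # removed by default
    assert leftover_dump_files(changelog_dir, '.done') != []  # .ldif.done files remain
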
@@ -439,7 +439,7 @@ def test_verify_changelog(topo):
     """Check if changelog dump file contains required ldap operations
 
     :id: 15ead076-8c18-410b-90eb-c2fe9eab966b
-    :setup: Replication with two masters.
+    :setup: Replication with two suppliers.
     :steps: 1. Add user to server.
             2. Perform ldap modify, modrdn and delete operations.
             3. Dump the changelog to a file using nsds5task.
@@ -461,7 +461,7 @@ def test_verify_changelog_online_backup(topo):
     """Check ldap operations in changelog dump file after online backup
 
     :id: 4001c34f-35b4-439e-8c2d-fa7e30375219
-    :setup: Replication with two masters.
+    :setup: Replication with two suppliers.
     :steps: 1. Add user to server.
             2. Take online backup using db2bak task.
             3. Restore the database using bak2db task.
@@ -477,10 +477,10 @@ def test_verify_changelog_online_backup(topo):
             6. Changelog dump file should contain ldap operations
     """
 
-    backup_dir = os.path.join(topo.ms['master1'].get_bak_dir(), 'online_backup')
+    backup_dir = os.path.join(topo.ms['supplier1'].get_bak_dir(), 'online_backup')
     log.info('Run db2bak script to take database backup')
     try:
-        topo.ms['master1'].tasks.db2bak(backup_dir=backup_dir, args={TASK_WAIT: True})
+        topo.ms['supplier1'].tasks.db2bak(backup_dir=backup_dir, args={TASK_WAIT: True})
     except ValueError:
         log.fatal('test_changelog5: Online backup failed')
         assert False
@@ -497,7 +497,7 @@ def test_verify_changelog_online_backup(topo):
 
     log.info('Run bak2db to restore directory server')
     try:
-        topo.ms['master1'].tasks.bak2db(backup_dir=backup_dir, args={TASK_WAIT: True})
+        topo.ms['supplier1'].tasks.bak2db(backup_dir=backup_dir, args={TASK_WAIT: True})
     except ValueError:
         log.fatal('test_changelog5: Online restore failed')
         assert False
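
The online path drives both directions through the tasks interface; a minimal sketch, reusing the backup_dir variable and the TASK_WAIT constant this module already imports:

    inst = topo.ms['supplier1']
    # args={TASK_WAIT: True} blocks until the task entry reports completion.
    inst.tasks.db2bak(backup_dir=backup_dir, args={TASK_WAIT: True})  # online backup
    inst.tasks.bak2db(backup_dir=backup_dir, args={TASK_WAIT: True})  # online restore
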
@@ -512,7 +512,7 @@ def test_verify_changelog_offline_backup(topo):
     """Check ldap operations in changelog dump file after offline backup
 
     :id: feed290d-57dd-46e4-9ab3-422c77589867
-    :setup: Replication with two masters.
+    :setup: Replication with two suppliers.
     :steps: 1. Add user to server.
             2. Stop server and take offline backup using db2bak.
             3. Restore the database using bak2db.
@@ -528,23 +528,23 @@ def test_verify_changelog_offline_backup(topo):
             6. Changelog dump file should contain ldap operations
     """
 
-    backup_dir = os.path.join(topo.ms['master1'].get_bak_dir(), 'offline_backup')
+    backup_dir = os.path.join(topo.ms['supplier1'].get_bak_dir(), 'offline_backup')
 
-    topo.ms['master1'].stop()
+    topo.ms['supplier1'].stop()
     log.info('Run db2bak to take database backup')
     try:
-        topo.ms['master1'].db2bak(backup_dir)
+        topo.ms['supplier1'].db2bak(backup_dir)
     except ValueError:
         log.fatal('test_changelog5: Offline backup failed')
         assert False
 
     log.info('Run bak2db to restore directory server')
     try:
-        topo.ms['master1'].bak2db(backup_dir)
+        topo.ms['supplier1'].bak2db(backup_dir)
     except ValueError:
         log.fatal('test_changelog5: Offline restore failed')
         assert False
-    topo.ms['master1'].start()
+    topo.ms['supplier1'].start()
 
     if ds_supports_new_changelog():
         backup_checkdir = os.path.join(backup_dir, DEFAULT_BENAME, BDB_CL_FILENAME)
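
The offline variants are plain instance methods and require a stopped server; a sketch under the same assumptions as above:

    inst = topo.ms['supplier1']
    inst.stop()              # offline db2bak/bak2db operate on the database files directly
    inst.db2bak(backup_dir)  # the backup includes the changelog database
    inst.bak2db(backup_dir)
    inst.start()
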
@@ -567,8 +567,8 @@ def test_changelog_maxage(topo, changelog_init):
     """Check nsslapd-changelog max age values
 
     :id: d284ff27-03b2-412c-ac74-ac4f2d2fae3b
-    :setup: Replication with two master, change nsslapd-changelogdir to
-            '/var/lib/dirsrv/slapd-master1/changelog' and
+    :setup: Replication with two suppliers, change nsslapd-changelogdir to
+            '/var/lib/dirsrv/slapd-supplier1/changelog' and
             set cn=Retro Changelog Plugin,cn=plugins,cn=config to 'on'
     :steps:
         1. Set nsslapd-changelogmaxage in cn=changelog5,cn=config to values - '12345','10s','30M','12h','2D','4w'
@@ -581,8 +581,8 @@ def test_changelog_maxage(topo, changelog_init):
     log.info('1. Test nsslapd-changelogmaxage in cn=changelog5,cn=config')
 
     # bind as directory manager
-    topo.ms["master1"].log.info("Bind as %s" % DN_DM)
-    topo.ms["master1"].simple_bind_s(DN_DM, PASSWORD)
+    topo.ms["supplier1"].log.info("Bind as %s" % DN_DM)
+    topo.ms["supplier1"].simple_bind_s(DN_DM, PASSWORD)
 
     add_and_check(topo, CHANGELOG, MAXAGE, '12345', True)
     add_and_check(topo, CHANGELOG, MAXAGE, '10s', True)
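
add_and_check (defined earlier in this module) boils down to an LDAP modify plus a read-back; a hedged sketch of the underlying operation, with set_maxage as an illustrative helper name:

    import ldap
    from lib389.utils import ensure_bytes

    def set_maxage(inst, value):
        # CHANGELOG is this suite's constant for cn=changelog5,cn=config.
        inst.modify_s(CHANGELOG, [(ldap.MOD_REPLACE,
                                   'nsslapd-changelogmaxage',
                                   ensure_bytes(value))])

    set_maxage(topo.ms['supplier1'], '2D')  # accepted: a number plus s/M/h/D/w unit
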
@@ -599,8 +599,8 @@ def test_ticket47669_changelog_triminterval(topo, changelog_init):
     """Check nsslapd-changelog triminterval values
 
     :id: 8f850c37-7e7c-49dd-a4e0-9344638616d6
-    :setup: Replication with two master, change nsslapd-changelogdir to
-            '/var/lib/dirsrv/slapd-master1/changelog' and
+    :setup: Replication with two suppliers, change nsslapd-changelogdir to
+            '/var/lib/dirsrv/slapd-supplier1/changelog' and
             set cn=Retro Changelog Plugin,cn=plugins,cn=config to 'on'
     :steps:
         1. Set nsslapd-changelogtrim-interval in cn=changelog5,cn=config to values -
@@ -614,8 +614,8 @@ def test_ticket47669_changelog_triminterval(topo, changelog_init):
     log.info('2. Test nsslapd-changelogtrim-interval in cn=changelog5,cn=config')
 
     # bind as directory manager
-    topo.ms["master1"].log.info("Bind as %s" % DN_DM)
-    topo.ms["master1"].simple_bind_s(DN_DM, PASSWORD)
+    topo.ms["supplier1"].log.info("Bind as %s" % DN_DM)
+    topo.ms["supplier1"].simple_bind_s(DN_DM, PASSWORD)
 
     add_and_check(topo, CHANGELOG, TRIMINTERVAL, '12345', True)
     add_and_check(topo, CHANGELOG, TRIMINTERVAL, '10s', True)
@@ -633,8 +633,8 @@ def test_changelog_compactdbinterval(topo, changelog_init):
     """Check nsslapd-changelog compactdbinterval values
 
     :id: 0f4b3118-9dfa-4c2a-945c-72847b42a48c
-    :setup: Replication with two master, change nsslapd-changelogdir to
-            '/var/lib/dirsrv/slapd-master1/changelog' and
+    :setup: Replication with two suppliers, change nsslapd-changelogdir to
+            '/var/lib/dirsrv/slapd-supplier1/changelog' and
             set cn=Retro Changelog Plugin,cn=plugins,cn=config to 'on'
     :steps:
         1. Set nsslapd-changelogcompactdb-interval in cn=changelog5,cn=config to values -
@@ -649,8 +649,8 @@ def test_changelog_compactdbinterval(topo, changelog_init):
     log.info('3. Test nsslapd-changelogcompactdb-interval in cn=changelog5,cn=config')
 
     # bind as directory manager
-    topo.ms["master1"].log.info("Bind as %s" % DN_DM)
-    topo.ms["master1"].simple_bind_s(DN_DM, PASSWORD)
+    topo.ms["supplier1"].log.info("Bind as %s" % DN_DM)
+    topo.ms["supplier1"].simple_bind_s(DN_DM, PASSWORD)
 
     add_and_check(topo, CHANGELOG, COMPACTDBINTERVAL, '12345', True)
     add_and_check(topo, CHANGELOG, COMPACTDBINTERVAL, '10s', True)
@@ -667,8 +667,8 @@ def test_retrochangelog_maxage(topo, changelog_init):
     """Check nsslapd-retrochangelog max age values
 
     :id: 0cb84d81-3e86-4dbf-84a2-66aefd8281db
-    :setup: Replication with two master, change nsslapd-changelogdir to
-            '/var/lib/dirsrv/slapd-master1/changelog' and
+    :setup: Replication with two suppliers, change nsslapd-changelogdir to
+            '/var/lib/dirsrv/slapd-supplier1/changelog' and
             set cn=Retro Changelog Plugin,cn=plugins,cn=config to 'on'
     :steps:
         1. Set nsslapd-changelogmaxage in cn=Retro Changelog Plugin,cn=plugins,cn=config to values -
@@ -683,8 +683,8 @@ def test_retrochangelog_maxage(topo, changelog_init):
     log.info('4. Test nsslapd-changelogmaxage in cn=Retro Changelog Plugin,cn=plugins,cn=config')
 
     # bind as directory manager
-    topo.ms["master1"].log.info("Bind as %s" % DN_DM)
-    topo.ms["master1"].simple_bind_s(DN_DM, PASSWORD)
+    topo.ms["supplier1"].log.info("Bind as %s" % DN_DM)
+    topo.ms["supplier1"].simple_bind_s(DN_DM, PASSWORD)
 
     add_and_check(topo, RETROCHANGELOG, MAXAGE, '12345', True)
     add_and_check(topo, RETROCHANGELOG, MAXAGE, '10s', True)
@@ -695,7 +695,7 @@ def test_retrochangelog_maxage(topo, changelog_init):
     add_and_check(topo, RETROCHANGELOG, MAXAGE, '-123', False)
     add_and_check(topo, RETROCHANGELOG, MAXAGE, 'xyz', False)
 
-    topo.ms["master1"].log.info("ticket47669 was successfully verified.")
+    topo.ms["supplier1"].log.info("ticket47669 was successfully verified.")
 
 @pytest.mark.ds50736
 def test_retrochangelog_trimming_crash(topo, changelog_init):
@@ -704,8 +704,8 @@ def test_retrochangelog_trimming_crash(topo, changelog_init):
 
     :id: 5d9bd7ca-e9bf-4be9-8fc8-902aa5513052
     :customerscenario: True
-    :setup: Replication with two master, change nsslapd-changelogdir to
-            '/var/lib/dirsrv/slapd-master1/changelog' and
+    :setup: Replication with two suppliers, change nsslapd-changelogdir to
+            '/var/lib/dirsrv/slapd-supplier1/changelog' and
             set cn=Retro Changelog Plugin,cn=plugins,cn=config to 'on'
     :steps:
         1. Set nsslapd-changelogmaxage in cn=Retro Changelog Plugin,cn=plugins,cn=config to value '-1'
@@ -724,23 +724,23 @@ def test_retrochangelog_trimming_crash(topo, changelog_init):
 
     # set the nsslapd-changelogmaxage directly on dse.ldif
     # because the set value is invalid
-    topo.ms["master1"].log.info("ticket50736 start verification")
-    topo.ms["master1"].stop()
-    retroPlugin = RetroChangelogPlugin(topo.ms["master1"])
-    dse_ldif = DSEldif(topo.ms["master1"])
+    topo.ms["supplier1"].log.info("ticket50736 start verification")
+    topo.ms["supplier1"].stop()
+    retroPlugin = RetroChangelogPlugin(topo.ms["supplier1"])
+    dse_ldif = DSEldif(topo.ms["supplier1"])
     dse_ldif.replace(retroPlugin.dn, 'nsslapd-changelogmaxage', '-1')
-    topo.ms["master1"].start()
+    topo.ms["supplier1"].start()
 
     # The crash should be systematic, but just in case do several restarts
     # with a delay to let all plugins init
     for i in range(5):
         time.sleep(1)
-        topo.ms["master1"].stop()
-        topo.ms["master1"].start()
+        topo.ms["supplier1"].stop()
+        topo.ms["supplier1"].start()
 
-    assert not topo.ms["master1"].detectDisorderlyShutdown()
+    assert not topo.ms["supplier1"].detectDisorderlyShutdown()
 
-    topo.ms["master1"].log.info("ticket 50736 was successfully verified.")
+    topo.ms["supplier1"].log.info("ticket 50736 was successfully verified.")
 
 
 

+ 30 - 30
dirsrvtests/tests/suites/replication/changelog_trimming_test.py

@@ -24,19 +24,19 @@ MAXAGE = 'nsslapd-changelogmaxage'
 MAXENTRIES = 'nsslapd-changelogmaxentries'
 TRIMINTERVAL = 'nsslapd-changelogtrim-interval'
 
-def do_mods(master, num):
+def do_mods(supplier, num):
     """Perform a num of mods on the default suffix
     """
-    domain = Domain(master, DEFAULT_SUFFIX)
+    domain = Domain(supplier, DEFAULT_SUFFIX)
     for i in range(num):
         domain.replace('description', 'change %s' % i)
 
-def set_value(master, attr, val):
+def set_value(supplier, attr, val):
     """
     Helper function to add/replace attr: val and check the added value
     """
     try:
-        master.modify_s(CHANGELOG, [(ldap.MOD_REPLACE, attr, ensure_bytes(val))])
+        supplier.modify_s(CHANGELOG, [(ldap.MOD_REPLACE, attr, ensure_bytes(val))])
     except ldap.LDAPError as e:
         log.error('Failed to add ' + attr + ': ' + val + ' to ' + CHANGELOG + ': error {}'.format(get_ldap_error_msg(e,'desc')))
         assert False
@@ -45,29 +45,29 @@ def set_value(master, attr, val):
 def setup_max_entries(topo, request):
     """Configure logging and changelog max entries
     """
-    master = topo.ms["master1"]
+    supplier = topo.ms["supplier1"]
 
-    master.config.loglevel((ErrorLog.REPLICA,), 'error')
+    supplier.config.loglevel((ErrorLog.REPLICA,), 'error')
 
     if ds_supports_new_changelog():
-        set_value(master, MAXENTRIES, '2')
-        set_value(master, TRIMINTERVAL, '300')
+        set_value(supplier, MAXENTRIES, '2')
+        set_value(supplier, TRIMINTERVAL, '300')
     else:
-        cl = Changelog5(master)
+        cl = Changelog5(supplier)
         cl.set_trim_interval('300')
 
 @pytest.fixture(scope="module")
 def setup_max_age(topo, request):
     """Configure logging and changelog max age
     """
-    master = topo.ms["master1"]
-    master.config.loglevel((ErrorLog.REPLICA,), 'error')
+    supplier = topo.ms["supplier1"]
+    supplier.config.loglevel((ErrorLog.REPLICA,), 'error')
 
     if ds_supports_new_changelog():
-        set_value(master, MAXAGE, '5')
-        set_value(master, TRIMINTERVAL, '300')
+        set_value(supplier, MAXAGE, '5')
+        set_value(supplier, TRIMINTERVAL, '300')
     else:
-        cl = Changelog5(master)
+        cl = Changelog5(supplier)
         cl.set_max_age('5')
         cl.set_trim_interval('300')
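
On older servers (before the changelog moved into the main database) the same knobs live on cn=changelog5,cn=config and are wrapped by the Changelog5 object this module imports; a short sketch:

    cl = Changelog5(supplier)
    cl.set_max_age('5')          # a bare number is seconds; older entries can be trimmed
    cl.set_trim_interval('300')  # how often the trimming thread wakes up
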
 
@@ -75,7 +75,7 @@ def test_max_age(topo, setup_max_age):
     """Test changing the trimming interval works with max age
 
     :id: b5de04a5-4d92-49ea-a725-1d278a1c647c
-    :setup: single master
+    :setup: single supplier
     :steps:
         1. Perform modification to populate changelog
         2. Adjust the changelog trimming interval
@@ -89,30 +89,30 @@ def test_max_age(topo, setup_max_age):
     """
     log.info("Testing changelog trimming interval with max age...")
 
-    master = topo.ms["master1"]
+    supplier = topo.ms["supplier1"]
     if not ds_supports_new_changelog():
-        cl = Changelog5(master)
+        cl = Changelog5(supplier)
 
     # Do mods to build up the cl entries
-    do_mods(master, 10)
+    do_mods(supplier, 10)
 
     time.sleep(1)  # Trimming should not have occurred
-    if master.searchErrorsLog("Trimmed") is True:
+    if supplier.searchErrorsLog("Trimmed") is True:
         log.fatal('Trimming event unexpectedly occurred')
         assert False
 
     if ds_supports_new_changelog():
-        set_value(master, TRIMINTERVAL, '5')
+        set_value(supplier, TRIMINTERVAL, '5')
     else:
         cl.set_trim_interval('5')
 
     time.sleep(3)  # Trimming should not have occurred
-    if master.searchErrorsLog("Trimmed") is True:
+    if supplier.searchErrorsLog("Trimmed") is True:
         log.fatal('Trimming event unexpectedly occurred')
         assert False
 
     time.sleep(3)  # Trimming should have occurred
-    if master.searchErrorsLog("Trimmed") is False:
+    if supplier.searchErrorsLog("Trimmed") is False:
         log.fatal('Trimming event did not occur')
         assert False
 
@@ -121,7 +121,7 @@ def test_max_entries(topo, setup_max_entries):
     """Test changing the trimming interval works with max entries
 
     :id: b5de04a5-4d92-49ea-a725-1d278a1c647d
-    :setup: single master
+    :setup: single supplier
     :steps:
         1. Perform modification to populate changelog
         2. Adjust the changelog trimming interval
@@ -135,28 +135,28 @@ def test_max_entries(topo, setup_max_entries):
     """
 
     log.info("Testing changelog triming interval with max entries...")
-    master = topo.ms["master1"]
+    supplier = topo.ms["supplier1"]
     if not ds_supports_new_changelog():
-        cl = Changelog5(master)
+        cl = Changelog5(supplier)
 
     # reset errors log
-    master.deleteErrorLogs()
+    supplier.deleteErrorLogs()
 
     # Do mods to build up the cl entries
-    do_mods(master, 10)
+    do_mods(supplier, 10)
 
     time.sleep(1)  # Trimming should not have occurred
-    if master.searchErrorsLog("Trimmed") is True:
+    if supplier.searchErrorsLog("Trimmed") is True:
         log.fatal('Trimming event unexpectedly occurred')
         assert False
 
     if ds_supports_new_changelog():
-        set_value(master, TRIMINTERVAL, '5')
+        set_value(supplier, TRIMINTERVAL, '5')
     else:
         cl.set_trim_interval('5')
 
     time.sleep(6)  # Trimming should have occurred
-    if master.searchErrorsLog("Trimmed") is False:
+    if supplier.searchErrorsLog("Trimmed") is False:
         log.fatal('Trimming event did not occur')
         assert False
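
Both tests detect trimming by polling the errors log for the trimming thread's message; the pattern, assuming the helpers defined above:

    import time

    supplier.deleteErrorLogs()                  # start from a clean errors log
    do_mods(supplier, 10)                       # build up changelog entries
    time.sleep(6)                               # cross one 5-second trim interval
    assert supplier.searchErrorsLog("Trimmed")  # the trimming pass was logged
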
 

+ 9 - 9
dirsrvtests/tests/suites/replication/cleanallruv_max_tasks_test.py

@@ -25,9 +25,9 @@ def test_max_tasks(topology_m4):
     "restore" the instance after running this test
 
     :id: c34d0b40-3c3e-4f53-8656-5e4c2a310a1f
-    :setup: Replication setup with four masters
+    :setup: Replication setup with four suppliers
     :steps:
-        1. Stop masters 3 & 4
+        1. Stop suppliers 3 & 4
         2. Create over 64 tasks between m1 and m2
         3. Check logs to see if (>64) tasks were rejected
 
@@ -37,15 +37,15 @@ def test_max_tasks(topology_m4):
         3. Success
     """
 
-    # Stop masters 3 & 4
-    m1 = topology_m4.ms["master1"]
-    m2 = topology_m4.ms["master2"]
-    m3 = topology_m4.ms["master3"]
-    m4 = topology_m4.ms["master4"]
+    # Stop suppliers 3 & 4
+    m1 = topology_m4.ms["supplier1"]
+    m2 = topology_m4.ms["supplier2"]
+    m3 = topology_m4.ms["supplier3"]
+    m4 = topology_m4.ms["supplier4"]
     m3.stop()
     m4.stop()
 
-    # Add over 64 tasks between master1 & 2 to try to exceed the 64 task limit
+    # Add over 64 tasks between supplier1 & 2 to try to exceed the 64 task limit
     for i in range(1, 64):
         cruv_task = CleanAllRUVTask(m1)
         cruv_task.create(properties={
@@ -60,7 +60,7 @@ def test_max_tasks(topology_m4):
             'replica-force-cleaning': 'yes',  # This allows the tasks to propagate
         })
 
-    # Check the errors log for our error message in master 1
+    # Check the errors log for our error message in supplier 1
     assert m1.searchErrorsLog('Exceeded maximum number of active CLEANALLRUV tasks')
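
Each loop iteration creates one task entry; a sketch of a single creation, with an illustrative replica-id value:

    cruv_task = CleanAllRUVTask(m1)
    cruv_task.create(properties={
        'replica-id': '4',                 # rid to scrub from every RUV (illustrative)
        'replica-base-dn': DEFAULT_SUFFIX,
        'replica-force-cleaning': 'yes',   # propagate without waiting on all replicas
    })
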
 
 

+ 235 - 235
dirsrvtests/tests/suites/replication/cleanallruv_test.py

@@ -57,7 +57,7 @@ class AddUsers(threading.Thread):
                     'gidNumber' : '%s' % (1000 + idx),
                     'homeDirectory' : '/home/testuser%s' % idx
                 })
-            # One of the masters was probably put into read only mode - just break out
+            # One of the suppliers was probably put into read only mode - just break out
             except ldap.UNWILLING_TO_PERFORM:
                 break
             except ldap.ALREADY_EXISTS:
@@ -65,20 +65,20 @@ class AddUsers(threading.Thread):
         conn.close()
 
 
-def remove_master4_agmts(msg, topology_m4):
-    """Remove all the repl agmts to master4. """
+def remove_supplier4_agmts(msg, topology_m4):
+    """Remove all the repl agmts to supplier4. """
 
-    log.info('%s: remove all the agreements to master 4...' % msg)
+    log.info('%s: remove all the agreements to supplier 4...' % msg)
     repl = ReplicationManager(DEFAULT_SUFFIX)
     # This will delete m4 from the topo *and* remove all incoming agreements
     # to m4.
-    repl.remove_master(topology_m4.ms["master4"],
-        [topology_m4.ms["master1"], topology_m4.ms["master2"], topology_m4.ms["master3"]])
+    repl.remove_supplier(topology_m4.ms["supplier4"],
+        [topology_m4.ms["supplier1"], topology_m4.ms["supplier2"], topology_m4.ms["supplier3"]])
 
 
 def check_ruvs(msg, topology_m4, m4rid):
-    """Check masters 1- 3 for master 4's rid."""
-    for inst in (topology_m4.ms["master1"], topology_m4.ms["master2"], topology_m4.ms["master3"]):
+    """Check suppliers 1- 3 for supplier 4's rid."""
+    for inst in (topology_m4.ms["supplier1"], topology_m4.ms["supplier2"], topology_m4.ms["supplier3"]):
         clean = False
         replicas = Replicas(inst)
         replica = replicas.get(DEFAULT_SUFFIX)
@@ -93,7 +93,7 @@ def check_ruvs(msg, topology_m4, m4rid):
             else:
                 clean = True
         if not clean:
-            raise Exception("Master %s was not cleaned in time." % inst.serverid)
+            raise Exception("Supplier %s was not cleaned in time." % inst.serverid)
     return True
 
 
@@ -107,7 +107,7 @@ def task_done(topology_m4, task_dn, timeout=60):
 
     while not done and count < timeout:
         try:
-            entry = topology_m4.ms["master1"].getEntry(task_dn, attrlist=attrlist)
+            entry = topology_m4.ms["supplier1"].getEntry(task_dn, attrlist=attrlist)
             if entry is not None:
                 if entry.hasAttr('nsTaskExitCode'):
                     done = True
@@ -126,26 +126,26 @@ def task_done(topology_m4, task_dn, timeout=60):
     return done
 
 
-def restore_master4(topology_m4):
-    """In our tests will always be removing master 4, so we need a common
+def restore_supplier4(topology_m4):
+    """In our tests will always be removing supplier 4, so we need a common
     way to restore it for another test
     """
 
-    # Restart the remaining masters to allow rid 4 to be reused.
+    # Restart the remaining suppliers to allow rid 4 to be reused.
     for inst in topology_m4.ms.values():
         inst.restart()
 
     repl = ReplicationManager(DEFAULT_SUFFIX)
-    repl.join_master(topology_m4.ms["master1"], topology_m4.ms["master4"])
+    repl.join_supplier(topology_m4.ms["supplier1"], topology_m4.ms["supplier4"])
 
     # Add the 2,3 -> 4 agmt.
-    repl.ensure_agreement(topology_m4.ms["master2"], topology_m4.ms["master4"])
-    repl.ensure_agreement(topology_m4.ms["master3"], topology_m4.ms["master4"])
+    repl.ensure_agreement(topology_m4.ms["supplier2"], topology_m4.ms["supplier4"])
+    repl.ensure_agreement(topology_m4.ms["supplier3"], topology_m4.ms["supplier4"])
     # And in reverse ...
-    repl.ensure_agreement(topology_m4.ms["master4"], topology_m4.ms["master2"])
-    repl.ensure_agreement(topology_m4.ms["master4"], topology_m4.ms["master3"])
+    repl.ensure_agreement(topology_m4.ms["supplier4"], topology_m4.ms["supplier2"])
+    repl.ensure_agreement(topology_m4.ms["supplier4"], topology_m4.ms["supplier3"])
 
-    log.info('Master 4 has been successfully restored.')
+    log.info('Supplier 4 has been successfully restored.')
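
The renamed ReplicationManager API carries the whole restore; condensed from the calls above (ensure_agreement is idempotent, creating the agreement only if it is missing):

    repl = ReplicationManager(DEFAULT_SUFFIX)
    repl.join_supplier(topology_m4.ms["supplier1"], topology_m4.ms["supplier4"])
    repl.ensure_agreement(topology_m4.ms["supplier2"], topology_m4.ms["supplier4"])
    repl.test_replication_topology(topology_m4.ms.values())
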
 
 
 @pytest.fixture()
@@ -155,16 +155,16 @@ def m4rid(request, topology_m4):
     log.debug("-------------- BEGIN RESET of m4 -----------------")
     repl = ReplicationManager(DEFAULT_SUFFIX)
     repl.test_replication_topology(topology_m4.ms.values())
-    # What is master4's rid?
-    m4rid = repl.get_rid(topology_m4.ms["master4"])
+    # What is supplier4's rid?
+    m4rid = repl.get_rid(topology_m4.ms["supplier4"])
 
     def fin():
         try:
-            # Restart the masters and rerun cleanallruv
+            # Restart the suppliers and rerun cleanallruv
             for inst in topology_m4.ms.values():
                 inst.restart()
 
-            cruv_task = CleanAllRUVTask(topology_m4.ms["master1"])
+            cruv_task = CleanAllRUVTask(topology_m4.ms["supplier1"])
             cruv_task.create(properties={
                 'replica-id': m4rid,
                 'replica-base-dn': DEFAULT_SUFFIX,
@@ -174,7 +174,7 @@ def m4rid(request, topology_m4):
         except ldap.UNWILLING_TO_PERFORM:
             # In some cases we already cleaned rid4, so if we fail, it's okay
             pass
-        restore_master4(topology_m4)
+        restore_supplier4(topology_m4)
         # Make sure everything works.
         repl.test_replication_topology(topology_m4.ms.values())
     request.addfinalizer(fin)
@@ -186,30 +186,30 @@ def test_clean(topology_m4, m4rid):
     """Check that cleanallruv task works properly
 
     :id: e9b3ce5c-e17c-409e-aafc-e97d630f2878
-    :setup: Replication setup with four masters
+    :setup: Replication setup with four suppliers
     :steps:
-        1. Check that replication works on all masters
-        2. Disable replication on master 4
-        3. Remove agreements to master 4 from other masters
-        4. Run a cleanallruv task on master 1 with a 'force' option 'on'
+        1. Check that replication works on all suppliers
+        2. Disable replication on supplier 4
+        3. Remove agreements to supplier 4 from other suppliers
+        4. Run a cleanallruv task on supplier 1 with a 'force' option 'on'
         5. Check that everything was cleaned
     :expectedresults:
-        1. Replication should work properly on all masters
+        1. Replication should work properly on all suppliers
         2. Operation should be successful
-        3. Agreements to master 4 should be removed
+        3. Agreements to supplier 4 should be removed
         4. Cleanallruv task should be successfully executed
         5. Everything should be cleaned
     """
 
     log.info('Running test_clean...')
-    # Disable master 4
-    # Remove the agreements from the other masters that point to master 4
-    log.info('test_clean: disable master 4...')
-    remove_master4_agmts("test_clean", topology_m4)
+    # Disable supplier 4
+    # Remove the agreements from the other suppliers that point to supplier 4
+    log.info('test_clean: disable supplier 4...')
+    remove_supplier4_agmts("test_clean", topology_m4)
 
     # Run the task
     log.info('test_clean: run the cleanAllRUV task...')
-    cruv_task = CleanAllRUVTask(topology_m4.ms["master1"])
+    cruv_task = CleanAllRUVTask(topology_m4.ms["supplier1"])
     cruv_task.create(properties={
         'replica-id': m4rid,
         'replica-base-dn': DEFAULT_SUFFIX,
@@ -217,55 +217,55 @@ def test_clean(topology_m4, m4rid):
         })
     cruv_task.wait()
 
-    # Check the other master's RUV for 'replica 4'
-    log.info('test_clean: check all the masters have been cleaned...')
+    # Check the other supplier's RUV for 'replica 4'
+    log.info('test_clean: check all the suppliers have been cleaned...')
     clean = check_ruvs("test_clean", topology_m4, m4rid)
     assert clean
 
-    log.info('test_clean PASSED, restoring master 4...')
+    log.info('test_clean PASSED, restoring supplier 4...')
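
Condensed, the clean cycle this test performs is:

    repl = ReplicationManager(DEFAULT_SUFFIX)
    m4rid = repl.get_rid(topology_m4.ms["supplier4"])  # replica id to be cleaned

    cruv_task = CleanAllRUVTask(topology_m4.ms["supplier1"])
    cruv_task.create(properties={
        'replica-id': m4rid,
        'replica-base-dn': DEFAULT_SUFFIX,
        'replica-force-cleaning': 'no',  # every replica must acknowledge the clean
    })
    cruv_task.wait()  # returns once the task entry carries an nsTaskExitCode
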
 
 
 def test_clean_restart(topology_m4, m4rid):
     """Check that cleanallruv task works properly after a restart
 
     :id: c6233bb3-092c-4919-9ac9-80dd02cc6e02
-    :setup: Replication setup with four masters
+    :setup: Replication setup with four suppliers
     :steps:
-        1. Disable replication on master 4
-        2. Remove agreements to master 4 from other masters
-        3. Stop master 3
-        4. Run a cleanallruv task on master 1
-        5. Stop master 1
-        6. Start master 3
+        1. Disable replication on supplier 4
+        2. Remove agreements to supplier 4 from other suppliers
+        3. Stop supplier 3
+        4. Run a cleanallruv task on supplier 1
+        5. Stop supplier 1
+        6. Start supplier 3
         7. Make sure that no crash happened
-        8. Start master 1
+        8. Start supplier 1
         9. Make sure that no crash happened
         10. Check that everything was cleaned
     :expectedresults:
         1. Operation should be successful
-        2. Agreements to master 4 should be removed
-        3. Master 3 should be stopped
+        2. Agreements to supplier 4 should be removed
+        3. Supplier 3 should be stopped
         4. Cleanallruv task should be successfully executed
-        5. Master 1 should be stopped
-        6. Master 3 should be started
+        5. Supplier 1 should be stopped
+        6. Supplier 3 should be started
         7. No crash should have happened
-        8. Master 1 should be started
+        8. Supplier 1 should be started
         9. No crash should have happened
         10. Everything should be cleaned
     """
     log.info('Running test_clean_restart...')
 
-    # Disable master 4
-    log.info('test_clean: disable master 4...')
-    # Remove the agreements from the other masters that point to master 4
-    remove_master4_agmts("test_clean", topology_m4)
+    # Disable supplier 4
+    log.info('test_clean: disable supplier 4...')
+    # Remove the agreements from the other suppliers that point to supplier 4
+    remove_supplier4_agmts("test_clean", topology_m4)
 
-    # Stop master 3 to keep the task running, so we can stop master 1...
-    topology_m4.ms["master3"].stop()
+    # Stop supplier 3 to keep the task running, so we can stop supplier 1...
+    topology_m4.ms["supplier3"].stop()
 
     # Run the task
     log.info('test_clean: run the cleanAllRUV task...')
-    cruv_task = CleanAllRUVTask(topology_m4.ms["master1"])
+    cruv_task = CleanAllRUVTask(topology_m4.ms["supplier1"])
     cruv_task.create(properties={
         'replica-id': m4rid,
         'replica-base-dn': DEFAULT_SUFFIX,
@@ -273,72 +273,72 @@ def test_clean_restart(topology_m4, m4rid):
         'replica-certify-all': 'yes'
         })
 
-    # Sleep a bit, then stop master 1
+    # Sleep a bit, then stop supplier 1
     time.sleep(5)
-    topology_m4.ms["master1"].stop()
+    topology_m4.ms["supplier1"].stop()
 
-    # Now start master 3 & 1, and make sure we didn't crash
-    topology_m4.ms["master3"].start()
-    if topology_m4.ms["master3"].detectDisorderlyShutdown():
-        log.fatal('test_clean_restart: Master 3 previously crashed!')
+    # Now start supplier 3 & 1, and make sure we didn't crash
+    topology_m4.ms["supplier3"].start()
+    if topology_m4.ms["supplier3"].detectDisorderlyShutdown():
+        log.fatal('test_clean_restart: Supplier 3 previously crashed!')
         assert False
 
-    topology_m4.ms["master1"].start(timeout=30)
-    if topology_m4.ms["master1"].detectDisorderlyShutdown():
-        log.fatal('test_clean_restart: Master 1 previously crashed!')
+    topology_m4.ms["supplier1"].start(timeout=30)
+    if topology_m4.ms["supplier1"].detectDisorderlyShutdown():
+        log.fatal('test_clean_restart: Supplier 1 previously crashed!')
         assert False
 
-    # Check the other master's RUV for 'replica 4'
-    log.info('test_clean_restart: check all the masters have been cleaned...')
+    # Check the other supplier's RUV for 'replica 4'
+    log.info('test_clean_restart: check all the suppliers have been cleaned...')
     clean = check_ruvs("test_clean_restart", topology_m4, m4rid)
     assert clean
 
-    log.info('test_clean_restart PASSED, restoring master 4...')
+    log.info('test_clean_restart PASSED, restoring supplier 4...')
 
 
 def test_clean_force(topology_m4, m4rid):
     """Check that multiple tasks with a 'force' option work properly
 
     :id: f8810dfe-d2d2-4dd9-ba03-5fc14896fabe
-    :setup: Replication setup with four masters
+    :setup: Replication setup with four suppliers
     :steps:
-        1. Stop master 3
-        2. Add a bunch of updates to master 4
-        3. Disable replication on master 4
-        4. Start master 3
-        5. Remove agreements to master 4 from other masters
-        6. Run a cleanallruv task on master 1 with a 'force' option 'on'
+        1. Stop supplier 3
+        2. Add a bunch of updates to supplier 4
+        3. Disable replication on supplier 4
+        4. Start supplier 3
+        5. Remove agreements to supplier 4 from other suppliers
+        6. Run a cleanallruv task on supplier 1 with a 'force' option 'on'
         7. Check that everything was cleaned
     :expectedresults:
-        1. Master 3 should be stopped
+        1. Supplier 3 should be stopped
         2. Operation should be successful
-        3. Replication on master 4 should be disabled
-        4. Master 3 should be started
-        5. Agreements to master 4 should be removed
+        3. Replication on supplier 4 should be disabled
+        4. Supplier 3 should be started
+        5. Agreements to supplier 4 should be removed
         6. Operation should be successful
         7. Everything should be cleaned
     """
 
     log.info('Running test_clean_force...')
 
-    # Stop master 3, while we update master 4, so that 3 is behind the other masters
-    topology_m4.ms["master3"].stop()
+    # Stop supplier 3, while we update supplier 4, so that 3 is behind the other suppliers
+    topology_m4.ms["supplier3"].stop()
 
-    # Add a bunch of updates to master 4
-    m4_add_users = AddUsers(topology_m4.ms["master4"], 1500)
+    # Add a bunch of updates to supplier 4
+    m4_add_users = AddUsers(topology_m4.ms["supplier4"], 1500)
     m4_add_users.start()
     m4_add_users.join()
 
-    # Start master 3, it should be out of sync with the other replicas...
-    topology_m4.ms["master3"].start()
+    # Start supplier 3, it should be out of sync with the other replicas...
+    topology_m4.ms["supplier3"].start()
 
-    # Remove the agreements from the other masters that point to master 4
-    remove_master4_agmts("test_clean_force", topology_m4)
+    # Remove the agreements from the other suppliers that point to supplier 4
+    remove_supplier4_agmts("test_clean_force", topology_m4)
 
-    # Run the task, use "force" because master 3 is not in sync with the other replicas
+    # Run the task, use "force" because supplier 3 is not in sync with the other replicas
     # in regards to the replica 4 RUV
     log.info('test_clean: run the cleanAllRUV task...')
-    cruv_task = CleanAllRUVTask(topology_m4.ms["master1"])
+    cruv_task = CleanAllRUVTask(topology_m4.ms["supplier1"])
     cruv_task.create(properties={
         'replica-id': m4rid,
         'replica-base-dn': DEFAULT_SUFFIX,
@@ -346,44 +346,44 @@ def test_clean_force(topology_m4, m4rid):
         })
     cruv_task.wait()
 
-    # Check the other master's RUV for 'replica 4'
-    log.info('test_clean_force: check all the masters have been cleaned...')
+    # Check the other supplier's RUV for 'replica 4'
+    log.info('test_clean_force: check all the suppliers have been cleaned...')
     clean = check_ruvs("test_clean_force", topology_m4, m4rid)
     assert clean
 
-    log.info('test_clean_force PASSED, restoring master 4...')
+    log.info('test_clean_force PASSED, restoring supplier 4...')
 
 
 def test_abort(topology_m4, m4rid):
     """Test the abort task basic functionality
 
     :id: b09a6887-8de0-4fac-8e41-73ccbaaf7a08
-    :setup: Replication setup with four masters
+    :setup: Replication setup with four suppliers
     :steps:
-        1. Disable replication on master 4
-        2. Remove agreements to master 4 from other masters
-        3. Stop master 2
-        4. Run a cleanallruv task on master 1
-        5. Run a cleanallruv abort task on master 1
+        1. Disable replication on supplier 4
+        2. Remove agreements to supplier 4 from other suppliers
+        3. Stop supplier 2
+        4. Run a cleanallruv task on supplier 1
+        5. Run a cleanallruv abort task on supplier 1
     :expectedresults: No hanging tasks left
-        1. Replication on master 4 should be disabled
-        2. Agreements to master 4 should be removed
-        3. Master 2 should be stopped
+        1. Replication on supplier 4 should be disabled
+        2. Agreements to supplier 4 should be removed
+        3. Supplier 2 should be stopped
         4. Operation should be successful
         5. Operation should be successful
     """
 
     log.info('Running test_abort...')
-    # Remove the agreements from the other masters that point to master 4
-    remove_master4_agmts("test_abort", topology_m4)
+    # Remove the agreements from the other suppliers that point to supplier 4
+    remove_supplier4_agmts("test_abort", topology_m4)
 
-    # Stop master 2
-    log.info('test_abort: stop master 2 to freeze the cleanAllRUV task...')
-    topology_m4.ms["master2"].stop()
+    # Stop supplier 2
+    log.info('test_abort: stop supplier 2 to freeze the cleanAllRUV task...')
+    topology_m4.ms["supplier2"].stop()
 
     # Run the task
     log.info('test_abort: add the cleanAllRUV task...')
-    cruv_task = CleanAllRUVTask(topology_m4.ms["master1"])
+    cruv_task = CleanAllRUVTask(topology_m4.ms["supplier1"])
     cruv_task.create(properties={
         'replica-id': m4rid,
         'replica-base-dn': DEFAULT_SUFFIX,
@@ -396,59 +396,59 @@ def test_abort(topology_m4, m4rid):
     # Abort the task
     cruv_task.abort()
 
-    # Check master 1 does not have the clean task running
-    log.info('test_abort: check master 1 no longer has a cleanAllRUV task...')
+    # Check supplier 1 does not have the clean task running
+    log.info('test_abort: check supplier 1 no longer has a cleanAllRUV task...')
     if not task_done(topology_m4, cruv_task.dn):
         log.fatal('test_abort: CleanAllRUV task was not aborted')
         assert False
 
-    # Start master 2
-    log.info('test_abort: start master 2 to begin the restore process...')
-    topology_m4.ms["master2"].start()
+    # Start supplier 2
+    log.info('test_abort: start supplier 2 to begin the restore process...')
+    topology_m4.ms["supplier2"].start()
 
-    log.info('test_abort PASSED, restoring master 4...')
+    log.info('test_abort PASSED, restoring supplier 4...')
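
The two abort flavours used by this group of tests, as one-line sketches:

    cruv_task.abort()              # queue an abort for the running clean task
    cruv_task.abort(certify=True)  # adds replica-certify-all and waits on every replica
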
 
 
 def test_abort_restart(topology_m4, m4rid):
     """Test the abort task can handle a restart, and then resume
 
     :id: b66e33d4-fe85-4e1c-b882-75da80f70ab3
-    :setup: Replication setup with four masters
+    :setup: Replication setup with four suppliers
     :steps:
-        1. Disable replication on master 4
-        2. Remove agreements to master 4 from other masters
-        3. Stop master 3
-        4. Run a cleanallruv task on master 1
-        5. Run a cleanallruv abort task on master 1
-        6. Restart master 1
+        1. Disable replication on supplier 4
+        2. Remove agreements to supplier 4 from other suppliers
+        3. Stop supplier 3
+        4. Run a cleanallruv task on supplier 1
+        5. Run a cleanallruv abort task on supplier 1
+        6. Restart supplier 1
         7. Make sure that no crash happened
-        8. Start master 3
-        9. Check master 1 does not have the clean task running
+        8. Start supplier 3
+        9. Check supplier 1 does not have the clean task running
         10. Check that errors log doesn't have 'Aborting abort task' message
     :expectedresults:
-        1. Replication on master 4 should be disabled
-        2. Agreements to master 4 should be removed
-        3. Master 3 should be stopped
+        1. Replication on supplier 4 should be disabled
+        2. Agreements to supplier 4 should be removed
+        3. Supplier 3 should be stopped
         4. Operation should be successful
         5. Operation should be successful
-        6. Master 1 should be restarted
+        6. Supplier 1 should be restarted
         7. No crash should have happened
-        8. Master 3 should be started
-        9. Check master 1 shouldn't have the clean task running
+        8. Supplier 3 should be started
+        9. Check supplier 1 shouldn't have the clean task running
         10. Errors log shouldn't have 'Aborting abort task' message
     """
 
     log.info('Running test_abort_restart...')
-    # Remove the agreements from the other masters that point to master 4
-    remove_master4_agmts("test_abort", topology_m4)
+    # Remove the agreements from the other suppliers that point to supplier 4
+    remove_supplier4_agmts("test_abort", topology_m4)
 
-    # Stop master 3
-    log.info('test_abort_restart: stop master 3 to freeze the cleanAllRUV task...')
-    topology_m4.ms["master3"].stop()
+    # Stop supplier 3
+    log.info('test_abort_restart: stop supplier 3 to freeze the cleanAllRUV task...')
+    topology_m4.ms["supplier3"].stop()
 
     # Run the task
     log.info('test_abort_restart: add the cleanAllRUV task...')
-    cruv_task = CleanAllRUVTask(topology_m4.ms["master1"])
+    cruv_task = CleanAllRUVTask(topology_m4.ms["supplier1"])
     cruv_task.create(properties={
         'replica-id': m4rid,
         'replica-base-dn': DEFAULT_SUFFIX,
@@ -461,63 +461,63 @@ def test_abort_restart(topology_m4, m4rid):
     # Abort the task
     cruv_task.abort(certify=True)
 
-    # Check master 1 does not have the clean task running
-    log.info('test_abort_abort: check master 1 no longer has a cleanAllRUV task...')
+    # Check supplier 1 does not have the clean task running
+    log.info('test_abort_restart: check supplier 1 no longer has a cleanAllRUV task...')
     if not task_done(topology_m4, cruv_task.dn):
         log.fatal('test_abort_restart: CleanAllRUV task was not aborted')
         assert False
 
-    # Now restart master 1, and make sure the abort process completes
-    topology_m4.ms["master1"].restart()
-    if topology_m4.ms["master1"].detectDisorderlyShutdown():
-        log.fatal('test_abort_restart: Master 1 previously crashed!')
+    # Now restart supplier 1, and make sure the abort process completes
+    topology_m4.ms["supplier1"].restart()
+    if topology_m4.ms["supplier1"].detectDisorderlyShutdown():
+        log.fatal('test_abort_restart: Supplier 1 previously crashed!')
         assert False
 
-    # Start master 3
-    topology_m4.ms["master3"].start()
+    # Start supplier 3
+    topology_m4.ms["supplier3"].start()
 
     # Need to wait 5 seconds before server processes any leftover tasks
     time.sleep(6)
 
-    # Check master 1 tried to run abort task.  We expect the abort task to be aborted.
-    if not topology_m4.ms["master1"].searchErrorsLog('Aborting abort task'):
+    # Check supplier 1 tried to run abort task.  We expect the abort task to be aborted.
+    if not topology_m4.ms["supplier1"].searchErrorsLog('Aborting abort task'):
         log.fatal('test_abort_restart: Abort task did not restart')
         assert False
 
-    log.info('test_abort_restart PASSED, restoring master 4...')
+    log.info('test_abort_restart PASSED, restoring supplier 4...')
 
 
 def test_abort_certify(topology_m4, m4rid):
     """Test the abort task with a replica-certify-all option
 
     :id: 78959966-d644-44a8-b98c-1fcf21b45eb0
-    :setup: Replication setup with four masters
+    :setup: Replication setup with four suppliers
     :steps:
-        1. Disable replication on master 4
-        2. Remove agreements to master 4 from other masters
-        3. Stop master 2
-        4. Run a cleanallruv task on master 1
-        5. Run a cleanallruv abort task on master 1 with a replica-certify-all option
+        1. Disable replication on supplier 4
+        2. Remove agreements to supplier 4 from other suppliers
+        3. Stop supplier 2
+        4. Run a cleanallruv task on supplier 1
+        5. Run a cleanallruv abort task on supplier 1 with a replica-certify-all option
     :expectedresults: No hanging tasks left
-        1. Replication on master 4 should be disabled
-        2. Agreements to master 4 should be removed
-        3. Master 2 should be stopped
+        1. Replication on supplier 4 should be disabled
+        2. Agreements to supplier 4 should be removed
+        3. Supplier 2 should be stopped
         4. Operation should be successful
         5. Operation should be successful
     """
 
     log.info('Running test_abort_certify...')
 
-    # Remove the agreements from the other masters that point to master 4
-    remove_master4_agmts("test_abort_certify", topology_m4)
+    # Remove the agreements from the other suppliers that point to supplier 4
+    remove_supplier4_agmts("test_abort_certify", topology_m4)
 
-    # Stop master 2
-    log.info('test_abort_certify: stop master 2 to freeze the cleanAllRUV task...')
-    topology_m4.ms["master2"].stop()
+    # Stop supplier 2
+    log.info('test_abort_certify: stop supplier 2 to freeze the cleanAllRUV task...')
+    topology_m4.ms["supplier2"].stop()
 
     # Run the task
     log.info('test_abort_certify: add the cleanAllRUV task...')
-    cruv_task = CleanAllRUVTask(topology_m4.ms["master1"])
+    cruv_task = CleanAllRUVTask(topology_m4.ms["supplier1"])
     cruv_task.create(properties={
         'replica-id': m4rid,
         'replica-base-dn': DEFAULT_SUFFIX,
@@ -538,77 +538,77 @@ def test_abort_certify(topology_m4, m4rid):
         log.fatal('test_abort_certify: abort task incorrectly finished')
         assert False
 
-    # Now start master 2 so it can be aborted
-    log.info('test_abort_certify: start master 2 to allow the abort task to finish...')
-    topology_m4.ms["master2"].start()
+    # Now start supplier 2 so it can be aborted
+    log.info('test_abort_certify: start supplier 2 to allow the abort task to finish...')
+    topology_m4.ms["supplier2"].start()
 
     # Wait for the abort task to stop
     if not task_done(topology_m4, abort_task.dn, 90):
         log.fatal('test_abort_certify: The abort CleanAllRUV task was not aborted')
         assert False
 
-    # Check master 1 does not have the clean task running
-    log.info('test_abort_certify: check master 1 no longer has a cleanAllRUV task...')
+    # Check supplier 1 does not have the clean task running
+    log.info('test_abort_certify: check supplier 1 no longer has a cleanAllRUV task...')
     if not task_done(topology_m4, cruv_task.dn):
         log.fatal('test_abort_certify: CleanAllRUV task was not aborted')
         assert False
 
-    log.info('test_abort_certify PASSED, restoring master 4...')
+    log.info('test_abort_certify PASSED, restoring supplier 4...')
 
 
 def test_stress_clean(topology_m4, m4rid):
     """Put each server(m1 - m4) under a stress, and perform the entire clean process
 
     :id: a8263cd6-f068-4357-86e0-e7c34504c8c5
-    :setup: Replication setup with four masters
+    :setup: Replication setup with four suppliers
     :steps:
-        1. Add a bunch of updates to all masters
-        2. Put master 4 to read-only mode
-        3. Disable replication on master 4
-        4. Remove agreements to master 4 from other masters
-        5. Run a cleanallruv task on master 1
+        1. Add a bunch of updates to all suppliers
+        2. Put supplier 4 to read-only mode
+        3. Disable replication on supplier 4
+        4. Remove agreements to supplier 4 from other suppliers
+        5. Run a cleanallruv task on supplier 1
         6. Check that everything was cleaned
     :expectedresults:
         1. Operation should be successful
-        2. Master 4 should be put to read-only mode
-        3. Replication on master 4 should be disabled
-        4. Agreements to master 4 should be removed
+        2. Supplier 4 should be put to read-only mode
+        3. Replication on supplier 4 should be disabled
+        4. Agreements to supplier 4 should be removed
         5. Operation should be successful
         6. Everything should be cleaned
     """
 
     log.info('Running test_stress_clean...')
-    log.info('test_stress_clean: put all the masters under load...')
+    log.info('test_stress_clean: put all the suppliers under load...')
 
-    ldbm_config = LDBMConfig(topology_m4.ms["master4"])
+    ldbm_config = LDBMConfig(topology_m4.ms["supplier4"])
 
-    # Put all the masters under load
+    # Put all the suppliers under load
     # not too high load else it takes a long time to converge and
     # the test result becomes unstable
-    m1_add_users = AddUsers(topology_m4.ms["master1"], 500)
+    m1_add_users = AddUsers(topology_m4.ms["supplier1"], 500)
     m1_add_users.start()
-    m2_add_users = AddUsers(topology_m4.ms["master2"], 500)
+    m2_add_users = AddUsers(topology_m4.ms["supplier2"], 500)
     m2_add_users.start()
-    m3_add_users = AddUsers(topology_m4.ms["master3"], 500)
+    m3_add_users = AddUsers(topology_m4.ms["supplier3"], 500)
     m3_add_users.start()
-    m4_add_users = AddUsers(topology_m4.ms["master4"], 500)
+    m4_add_users = AddUsers(topology_m4.ms["supplier4"], 500)
     m4_add_users.start()
 
     # Allow sometime to get replication flowing in all directions
     log.info('test_stress_clean: allow some time for replication to get flowing...')
     time.sleep(5)
 
-    # Put master 4 into read only mode
+    # Put supplier 4 into read only mode
     ldbm_config.set('nsslapd-readonly', 'on')
-    # We need to wait for master 4 to push its changes out
-    log.info('test_stress_clean: allow some time for master 4 to push changes out (60 seconds)...')
+    # We need to wait for supplier 4 to push its changes out
+    log.info('test_stress_clean: allow some time for supplier 4 to push changes out (30 seconds)...')
     time.sleep(30)
 
-    # Remove the agreements from the other masters that point to master 4
-    remove_master4_agmts("test_stress_clean", topology_m4)
+    # Remove the agreements from the other suppliers that point to supplier 4
+    remove_supplier4_agmts("test_stress_clean", topology_m4)
 
     # Run the task
-    cruv_task = CleanAllRUVTask(topology_m4.ms["master1"])
+    cruv_task = CleanAllRUVTask(topology_m4.ms["supplier1"])
     cruv_task.create(properties={
         'replica-id': m4rid,
         'replica-base-dn': DEFAULT_SUFFIX,
@@ -623,20 +623,20 @@ def test_stress_clean(topology_m4, m4rid):
     m3_add_users.join()
     m4_add_users.join()
 
-    # Check the other master's RUV for 'replica 4'
+    # Check the other supplier's RUV for 'replica 4'
     log.info('test_stress_clean: check if all the replicas have been cleaned...')
     clean = check_ruvs("test_stress_clean", topology_m4, m4rid)
     assert clean
 
-    log.info('test_stress_clean:  PASSED, restoring master 4...')
+    log.info('test_stress_clean:  PASSED, restoring supplier 4...')
 
     # Sleep for a bit to let replication complete
     log.info("Sleep for 120 seconds to allow replication to complete...")
     repl = ReplicationManager(DEFAULT_SUFFIX)
     repl.test_replication_topology([
-        topology_m4.ms["master1"],
-        topology_m4.ms["master2"],
-        topology_m4.ms["master3"],
+        topology_m4.ms["supplier1"],
+        topology_m4.ms["supplier2"],
+        topology_m4.ms["supplier3"],
         ], timeout=120)
 
     # Turn off readonly mode
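
The read-only toggle is what freezes supplier 4's RUV while the load threads drain; a sketch with the LDBMConfig object used above:

    ldbm_config = LDBMConfig(topology_m4.ms["supplier4"])
    ldbm_config.set('nsslapd-readonly', 'on')   # refuse further writes on supplier 4
    # ... run the cleanallruv task ...
    ldbm_config.set('nsslapd-readonly', 'off')  # restore write access afterwards
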
@@ -647,22 +647,22 @@ def test_multiple_tasks_with_force(topology_m4, m4rid):
     """Check that multiple tasks with a 'force' option work properly
 
     :id: eb76a93d-8d1c-405e-9f25-6e8d5a781098
-    :setup: Replication setup with four masters
+    :setup: Replication setup with four suppliers
     :steps:
-        1. Stop master 3
-        2. Add a bunch of updates to master 4
-        3. Disable replication on master 4
-        4. Start master 3
-        5. Remove agreements to master 4 from other masters
-        6. Run a cleanallruv task on master 1 with a 'force' option 'on'
-        7. Run one more cleanallruv task on master 1 with a 'force' option 'off'
+        1. Stop supplier 3
+        2. Add a bunch of updates to supplier 4
+        3. Disable replication on supplier 4
+        4. Start supplier 3
+        5. Remove agreements to supplier 4 from other suppliers
+        6. Run a cleanallruv task on supplier 1 with a 'force' option 'on'
+        7. Run one more cleanallruv task on supplier 1 with a 'force' option 'off'
         8. Check that everything was cleaned
     :expectedresults:
-        1. Master 3 should be stopped
+        1. Supplier 3 should be stopped
         2. Operation should be successful
-        3. Replication on master 4 should be disabled
-        4. Master 3 should be started
-        5. Agreements to master 4 should be removed
+        3. Replication on supplier 4 should be disabled
+        4. Supplier 3 should be started
+        5. Agreements to supplier 4 should be removed
         6. Operation should be successful
         7. Operation should be successful
         8. Everything should be cleaned
@@ -670,25 +670,25 @@ def test_multiple_tasks_with_force(topology_m4, m4rid):
 
     log.info('Running test_multiple_tasks_with_force...')
 
-    # Stop master 3, while we update master 4, so that 3 is behind the other masters
-    topology_m4.ms["master3"].stop()
+    # Stop supplier 3, while we update supplier 4, so that 3 is behind the other suppliers
+    topology_m4.ms["supplier3"].stop()
 
-    # Add a bunch of updates to master 4
-    m4_add_users = AddUsers(topology_m4.ms["master4"], 1500)
+    # Add a bunch of updates to supplier 4
+    m4_add_users = AddUsers(topology_m4.ms["supplier4"], 1500)
     m4_add_users.start()
     m4_add_users.join()
 
-    # Start master 3, it should be out of sync with the other replicas...
-    topology_m4.ms["master3"].start()
+    # Start supplier 3, it should be out of sync with the other replicas...
+    topology_m4.ms["supplier3"].start()
 
-    # Disable master 4
-    # Remove the agreements from the other masters that point to master 4
-    remove_master4_agmts("test_multiple_tasks_with_force", topology_m4)
+    # Disable supplier 4
+    # Remove the agreements from the other suppliers that point to supplier 4
+    remove_supplier4_agmts("test_multiple_tasks_with_force", topology_m4)
 
-    # Run the task, use "force" because master 3 is not in sync with the other replicas
+    # Run the task, use "force" because supplier 3 is not in sync with the other replicas
     # in regards to the replica 4 RUV
     log.info('test_multiple_tasks_with_force: run the cleanAllRUV task with "force" on...')
-    cruv_task = CleanAllRUVTask(topology_m4.ms["master1"])
+    cruv_task = CleanAllRUVTask(topology_m4.ms["supplier1"])
     cruv_task.create(properties={
         'replica-id': m4rid,
         'replica-base-dn': DEFAULT_SUFFIX,
@@ -701,7 +701,7 @@ def test_multiple_tasks_with_force(topology_m4, m4rid):
     # NOTE: This must be try not py.test raises, because the above may or may
     # not have completed yet ....
     try:
-        cruv_task_fail = CleanAllRUVTask(topology_m4.ms["master1"])
+        cruv_task_fail = CleanAllRUVTask(topology_m4.ms["supplier1"])
         cruv_task_fail.create(properties={
             'replica-id': m4rid,
             'replica-base-dn': DEFAULT_SUFFIX,
@@ -714,12 +714,12 @@ def test_multiple_tasks_with_force(topology_m4, m4rid):
     # Wait for the force task ....
     cruv_task.wait()
 
-    # Check the other master's RUV for 'replica 4'
-    log.info('test_multiple_tasks_with_force: check all the masters have been cleaned...')
+    # Check the other supplier's RUV for 'replica 4'
+    log.info('test_multiple_tasks_with_force: check all the suppliers have been cleaned...')
     clean = check_ruvs("test_clean_force", topology_m4, m4rid)
     assert clean
-    # Check master 1 does not have the clean task running
-    log.info('test_abort: check master 1 no longer has a cleanAllRUV task...')
+    # Check supplier 1 does not have the clean task running
+    log.info('test_multiple_tasks_with_force: check supplier 1 no longer has a cleanAllRUV task...')
     if not task_done(topology_m4, cruv_task.dn):
         log.fatal('test_multiple_tasks_with_force: CleanAllRUV task did not finish')
         assert False
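
The duplicate-task behaviour relied on here: the server may refuse a second clean task for the same rid, so the test wraps the creation in try/except rather than pytest.raises; sketched:

    try:
        duplicate = CleanAllRUVTask(topology_m4.ms["supplier1"])
        duplicate.create(properties={
            'replica-id': m4rid,
            'replica-base-dn': DEFAULT_SUFFIX,
            'replica-force-cleaning': 'no',
        })
    except ldap.UNWILLING_TO_PERFORM:
        # a clean task for this rid may already be queued or finished
        pass
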
@@ -731,16 +731,16 @@ def test_clean_shutdown_crash(topology_m2):
     """Check that server didn't crash after shutdown when running CleanAllRUV task
 
     :id: c34d0b40-3c3e-4f53-8656-5e4c2a310aaf
-    :setup: Replication setup with two masters
+    :setup: Replication setup with two suppliers
     :steps:
-        1. Enable TLS on both masters
+        1. Enable TLS on both suppliers
         2. Reconfigure both agreements to use TLS Client auth
-        3. Stop master2
+        3. Stop supplier2
         4. Run the CleanAllRUV task
-        5. Restart master1
-        6. Check if master1 didn't crash
-        7. Restart master1 again
-        8. Check if master1 didn't crash
+        5. Restart supplier1
+        6. Check if supplier1 didn't crash
+        7. Restart supplier1 again
+        8. Check if supplier1 didn't crash
 
     :expectedresults:
         1. Success
@@ -753,8 +753,8 @@ def test_clean_shutdown_crash(topology_m2):
         8. Success
     """
 
-    m1 = topology_m2.ms["master1"]
-    m2 = topology_m2.ms["master2"]
+    m1 = topology_m2.ms["supplier1"]
+    m2 = topology_m2.ms["supplier2"]
 
     repl = ReplicationManager(DEFAULT_SUFFIX)
 
@@ -800,7 +800,7 @@ def test_clean_shutdown_crash(topology_m2):
     )
     agmt_m2.remove_all('nsDS5ReplicaBindDN')
 
-    log.info('Stopping master2')
+    log.info('Stopping supplier2')
     m2.stop()
 
     log.info('Run the cleanAllRUV task')
@@ -814,7 +814,7 @@ def test_clean_shutdown_crash(topology_m2):
 
     m1.restart()
 
-    log.info('Check if master1 crashed')
+    log.info('Check if supplier1 crashed')
     assert not m1.detectDisorderlyShutdown()
 
     log.info('Repeat')

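Read together, the renamed hunks above reduce to one pattern. A minimal sketch, assuming the lib389 imports this suite already uses (the clean_rid helper is illustrative, and 'replica-force-cleaning' is the property name the suite passes elsewhere for the force mode):

    from lib389.tasks import CleanAllRUVTask
    from lib389._constants import DEFAULT_SUFFIX

    def clean_rid(supplier, rid, force=False):
        # Launch cleanAllRUV on one supplier; the server propagates the
        # task to every other supplier in the topology.
        task = CleanAllRUVTask(supplier)
        task.create(properties={
            'replica-id': rid,
            'replica-base-dn': DEFAULT_SUFFIX,
            'replica-force-cleaning': 'yes' if force else 'no',
        })
        task.wait()  # block until the task finishes
        return task.dn
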
+ 56 - 56
dirsrvtests/tests/suites/replication/conflict_resolve_test.py

@@ -114,7 +114,7 @@ def _test_base(topology):
     audit log, error log for replica and access log for internal
     """
 
-    M1 = topology.ms["master1"]
+    M1 = topology.ms["supplier1"]
 
     conts = nsContainers(M1, SUFFIX)
     base_m2 = conts.ensure_state(properties={'cn': 'test_container'})
@@ -148,7 +148,7 @@ def base_m2(topology_m2, request):
 
     def fin():
         if not DEBUGGING:
-            _delete_test_base(topology_m2.ms["master1"], tb.dn)
+            _delete_test_base(topology_m2.ms["supplier1"], tb.dn)
     request.addfinalizer(fin)
 
     return tb
@@ -160,18 +160,18 @@ def base_m3(topology_m3, request):
 
     def fin():
         if not DEBUGGING:
-            _delete_test_base(topology_m3.ms["master1"], tb.dn)
+            _delete_test_base(topology_m3.ms["supplier1"], tb.dn)
     request.addfinalizer(fin)
 
     return tb
 
 
-class TestTwoMasters:
+class TestTwoSuppliers:
     def test_add_modrdn(self, topology_m2, base_m2):
         """Check that conflict properly resolved for create - modrdn operations
 
         :id: 77f09b18-03d1-45da-940b-1ad2c2908ebb
-        :setup: Two master replication, test container for entries, enable plugin logging,
+        :setup: Two supplier replication, test container for entries, enable plugin logging,
                 audit log, error log for replica and access log for internal
         :steps:
             1. Add five users to m1 and wait for replication to happen
@@ -183,7 +183,7 @@ class TestTwoMasters:
             7. Rename an entry on m1 and rename on m2. Use different entries
                but rename them to the same entry
             8. Resume replication
-            9. Check that the entries on both masters are the same and replication is working
+            9. Check that the entries on both suppliers are the same and replication is working
         :expectedresults:
             1. It should pass
             2. It should pass
@@ -195,8 +195,8 @@ class TestTwoMasters:
             8. It should pass
         """
 
-        M1 = topology_m2.ms["master1"]
-        M2 = topology_m2.ms["master2"]
+        M1 = topology_m2.ms["supplier1"]
+        M2 = topology_m2.ms["supplier2"]
         test_users_m1 = UserAccounts(M1, base_m2.dn, rdn=None)
         test_users_m2 = UserAccounts(M2, base_m2.dn, rdn=None)
         repl = ReplicationManager(SUFFIX)
@@ -242,7 +242,7 @@ class TestTwoMasters:
 
         :id: 77f09b18-03d1-45da-940b-1ad2c2908eb1
         :customerscenario: True
-        :setup: Two master replication, test container for entries, enable plugin logging,
+        :setup: Two supplier replication, test container for entries, enable plugin logging,
                 audit log, error log for replica and access log for internal
         :steps:
             1. Add ten users to m1 and wait for replication to happen
@@ -251,11 +251,11 @@ class TestTwoMasters:
             4. Test add-mod on m1 and add on m2
             5. Test add-modrdn on m1 and add on m2
             6. Test multiple add, modrdn
-            7. Test Add-del on both masters
+            7. Test Add-del on both suppliers
             8. Test modrdn-modrdn
             9. Test modrdn-del
             10. Resume replication
-            11. Check that the entries on both masters are the same and replication is working
+            11. Check that the entries on both suppliers are the same and replication is working
         :expectedresults:
             1. It should pass
             2. It should pass
@@ -270,8 +270,8 @@ class TestTwoMasters:
             11. It should pass
         """
 
-        M1 = topology_m2.ms["master1"]
-        M2 = topology_m2.ms["master2"]
+        M1 = topology_m2.ms["supplier1"]
+        M2 = topology_m2.ms["supplier2"]
 
         test_users_m1 = UserAccounts(M1, base_m2.dn, rdn=None)
         test_users_m2 = UserAccounts(M2, base_m2.dn, rdn=None)
@@ -339,7 +339,7 @@ class TestTwoMasters:
         _create_user(test_users_m1, user_num, sleep=True)
         _modify_user(test_users_m2, user_num, sleep=True)
 
-        log.info("Add - del on both masters")
+        log.info("Add - del on both suppliers")
         user_num += 1
         _create_user(test_users_m1, user_num)
         _delete_user(test_users_m1, user_num, sleep=True)
@@ -374,7 +374,7 @@ class TestTwoMasters:
         with memberOf and groups
 
         :id: 77f09b18-03d1-45da-940b-1ad2c2908eb3
-        :setup: Two master replication, test container for entries, enable plugin logging,
+        :setup: Two supplier replication, test container for entries, enable plugin logging,
                 audit log, error log for replica and access log for internal
         :steps:
             1. Enable memberOf plugin
@@ -386,7 +386,7 @@ class TestTwoMasters:
             7. Create a group on m2 and m1, delete from m1
             8. Create two different groups on m2
             9. Resume replication
-            10. Check that the entries on both masters are the same and replication is working
+            10. Check that the entries on both suppliers are the same and replication is working
         :expectedresults:
             1. It should pass
             2. It should pass
@@ -402,8 +402,8 @@ class TestTwoMasters:
 
         pytest.xfail("Issue 49591 - work in progress")
 
-        M1 = topology_m2.ms["master1"]
-        M2 = topology_m2.ms["master2"]
+        M1 = topology_m2.ms["supplier1"]
+        M2 = topology_m2.ms["supplier2"]
         test_users_m1 = UserAccounts(M1, base_m2.dn, rdn=None)
         test_groups_m1 = Groups(M1, base_m2.dn, rdn=None)
         test_groups_m2 = Groups(M2, base_m2.dn, rdn=None)
@@ -469,17 +469,17 @@ class TestTwoMasters:
         with managed entries
 
         :id: 77f09b18-03d1-45da-940b-1ad2c2908eb4
-        :setup: Two master replication, test container for entries, enable plugin logging,
+        :setup: Two supplier replication, test container for entries, enable plugin logging,
                 audit log, error log for replica and access log for internal
         :steps:
             1. Create ou=managed_users and ou=managed_groups under test container
             2. Configure managed entries plugin and add a template to test container
             3. Add a user to m1 and wait for replication to happen
             4. Pause replication
-            5. Create a user on m1 and m2 with a same group ID on both master
-            6. Create a user on m1 and m2 with a different group ID on both master
+            5. Create a user on m1 and m2 with the same group ID on both suppliers
+            6. Create a user on m1 and m2 with a different group ID on both suppliers
             7. Resume replication
-            8. Check that the entries on both masters are the same and replication is working
+            8. Check that the entries on both suppliers are the same and replication is working
         :expectedresults:
             1. It should pass
             2. It should pass
@@ -493,8 +493,8 @@ class TestTwoMasters:
 
         pytest.xfail("Issue 49591 - work in progress")
 
-        M1 = topology_m2.ms["master1"]
-        M2 = topology_m2.ms["master2"]
+        M1 = topology_m2.ms["supplier1"]
+        M2 = topology_m2.ms["supplier2"]
         repl = ReplicationManager(SUFFIX)
 
         ous = OrganizationalUnits(M1, DEFAULT_SUFFIX)
@@ -548,23 +548,23 @@ class TestTwoMasters:
         with nested entries with children
 
         :id: 77f09b18-03d1-45da-940b-1ad2c2908eb5
-        :setup: Two master replication, test container for entries, enable plugin logging,
+        :setup: Two supplier replication, test container for entries, enable plugin logging,
                 audit log, error log for replica and access log for internal
         :steps:
             1. Add 15 containers to m1 and wait for replication to happen
             2. Pause replication
-            3. Create parent-child on master2 and master1
-            4. Create parent-child on master1 and master2
-            5. Create parent-child on master1 and master2 different child rdn
-            6. Create parent-child on master1 and delete parent on master2
-            7. Create parent on master1, delete it and parent-child on master2, delete them
-            8. Create parent on master1, delete it and parent-two children on master2
-            9. Create parent-two children on master1 and parent-child on master2, delete them
+            3. Create parent-child on supplier2 and supplier1
+            4. Create parent-child on supplier1 and supplier2
+            5. Create parent-child on supplier1 and supplier2 different child rdn
+            6. Create parent-child on supplier1 and delete parent on supplier2
+            7. Create parent on supplier1, delete it and parent-child on supplier2, delete them
+            8. Create parent on supplier1, delete it and parent-two children on supplier2
+            9. Create parent-two children on supplier1 and parent-child on supplier2, delete them
             10. Create three subsets inside existing container entry, applying only part of changes on m2
             11. Create more combinations of the subset with parent-child on m1 and parent on m2
             12. Delete container on m1, modify user1 on m1, create parent on m2 and modify user2 on m2
             13. Resume replication
-            14. Check that the entries on both masters are the same and replication is working
+            14. Check that the entries on both suppliers are the same and replication is working
         :expectedresults:
             1. It should pass
             2. It should pass
@@ -584,8 +584,8 @@ class TestTwoMasters:
 
         pytest.xfail("Issue 49591 - work in progress")
 
-        M1 = topology_m2.ms["master1"]
-        M2 = topology_m2.ms["master2"]
+        M1 = topology_m2.ms["supplier1"]
+        M2 = topology_m2.ms["supplier2"]
         repl = ReplicationManager(SUFFIX)
         test_users_m1 = UserAccounts(M1, base_m2.dn, rdn=None)
         test_users_m2 = UserAccounts(M2, base_m2.dn, rdn=None)
@@ -601,25 +601,25 @@ class TestTwoMasters:
 
         topology_m2.pause_all_replicas()
 
-        log.info("Create parent-child on master2 and master1")
+        log.info("Create parent-child on supplier2 and supplier1")
         _create_container(M2, base_m2.dn, 'p0', sleep=True)
         cont_p = _create_container(M1, base_m2.dn, 'p0', sleep=True)
         _create_container(M1, cont_p.dn, 'c0', sleep=True)
         _create_container(M2, cont_p.dn, 'c0', sleep=True)
 
-        log.info("Create parent-child on master1 and master2")
+        log.info("Create parent-child on supplier1 and supplier2")
         cont_p = _create_container(M1, base_m2.dn, 'p1', sleep=True)
         _create_container(M2, base_m2.dn, 'p1', sleep=True)
         _create_container(M1, cont_p.dn, 'c1', sleep=True)
         _create_container(M2, cont_p.dn, 'c1', sleep=True)
 
-        log.info("Create parent-child on master1 and master2 different child rdn")
+        log.info("Create parent-child on supplier1 and supplier2 different child rdn")
         cont_p = _create_container(M1, base_m2.dn, 'p2', sleep=True)
         _create_container(M2, base_m2.dn, 'p2', sleep=True)
         _create_container(M1, cont_p.dn, 'c2', sleep=True)
         _create_container(M2, cont_p.dn, 'c3', sleep=True)
 
-        log.info("Create parent-child on master1 and delete parent on master2")
+        log.info("Create parent-child on supplier1 and delete parent on supplier2")
         cont_num = 0
         cont_p_m1 = _create_container(M1, cont_list[cont_num].dn, 'p0', sleep=True)
         cont_p_m2 = _create_container(M2, cont_list[cont_num].dn, 'p0', sleep=True)
@@ -632,7 +632,7 @@ class TestTwoMasters:
         _create_container(M1, cont_p_m1.dn, 'c0', sleep=True)
         _delete_container(cont_p_m2, sleep=True)
 
-        log.info("Create parent on master1, delete it and parent-child on master2, delete them")
+        log.info("Create parent on supplier1, delete it and parent-child on supplier2, delete them")
         cont_num += 1
         cont_p_m1 = _create_container(M1, cont_list[cont_num].dn, 'p0')
         _delete_container(cont_p_m1, sleep=True)
@@ -651,7 +651,7 @@ class TestTwoMasters:
         cont_p_m1 = _create_container(M1, cont_list[cont_num].dn, 'p0')
         _delete_container(cont_p_m1)
 
-        log.info("Create parent on master1, delete it and parent-two children on master2")
+        log.info("Create parent on supplier1, delete it and parent-two children on supplier2")
         cont_num += 1
         cont_p_m1 = _create_container(M1, cont_list[cont_num].dn, 'p0')
         _delete_container(cont_p_m1, sleep=True)
@@ -668,7 +668,7 @@ class TestTwoMasters:
         cont_p_m1 = _create_container(M1, cont_list[cont_num].dn, 'p0')
         _delete_container(cont_p_m1, sleep=True)
 
-        log.info("Create parent-two children on master1 and parent-child on master2, delete them")
+        log.info("Create parent-two children on supplier1 and parent-child on supplier2, delete them")
         cont_num += 1
         cont_p_m2 = _create_container(M2, cont_list[cont_num].dn, 'p0')
         cont_c_m2 = _create_container(M2, cont_p_m2.dn, 'c0')
@@ -748,7 +748,7 @@ class TestTwoMasters:
 
         conts_dns = {}
         for num in range(1, 3):
-            inst = topology_m2.ms["master{}".format(num)]
+            inst = topology_m2.ms["supplier{}".format(num)]
             conts_dns[inst.serverid] = []
             conts = nsContainers(inst, base_m2.dn)
             for cont in conts.list():
@@ -770,7 +770,7 @@ class TestTwoMasters:
            MODRDN and MOD_REPL its RDN values are the same on both servers
 
         :id: 225b3522-8ed7-4256-96f9-5fab9b7044a5
-        :setup: Two master replication,
+        :setup: Two supplier replication,
                 audit log, error log for replica and access log for internal
         :steps:
             1. Create a test entry uid=user_test_1000,...
@@ -794,8 +794,8 @@ class TestTwoMasters:
             9. It should pass
         """
 
-        M1 = topology_m2.ms["master1"]
-        M2 = topology_m2.ms["master2"]
+        M1 = topology_m2.ms["supplier1"]
+        M2 = topology_m2.ms["supplier2"]
 
         # add a test user
         test_users_m1 = UserAccounts(M1, base_m2.dn, rdn=None)
@@ -854,7 +854,7 @@ class TestTwoMasters:
            MODRDN and MOD_REPL its RDN values are the same on both servers
 
         :id: c38ae613-5d1e-47cf-b051-c7284e64b817
-        :setup: Two master replication, test container for entries, enable plugin logging,
+        :setup: Two supplier replication, test container for entries, enable plugin logging,
                 audit log, error log for replica and access log for internal
         :steps:
             1. Create a test entry uid=user_test_1000,...
@@ -878,8 +878,8 @@ class TestTwoMasters:
             9. It should pass
         """
 
-        M1 = topology_m2.ms["master1"]
-        M2 = topology_m2.ms["master2"]
+        M1 = topology_m2.ms["supplier1"]
+        M2 = topology_m2.ms["supplier2"]
 
         # add a test user with a dummy 'uid' extra value because modrdn removes
         # uid that conflict with 'account' objectclass
@@ -937,13 +937,13 @@ class TestTwoMasters:
             log.info("Check M2.uid %s is also on M1" % val)
             assert(val in final_user_m1.get_attr_vals_utf8('employeenumber'))
 
-class TestThreeMasters:
+class TestThreeSuppliers:
     def test_nested_entries(self, topology_m3, base_m3):
         """Check that conflict properly resolved for operations
         with nested entries with children
 
         :id: 77f09b18-03d1-45da-940b-1ad2c2908eb6
-        :setup: Three master replication, test container for entries, enable plugin logging,
+        :setup: Three supplier replication, test container for entries, enable plugin logging,
                 audit log, error log for replica and access log for internal
         :steps:
             1. Add 15 containers to m1 and wait for replication to happen
@@ -954,7 +954,7 @@ class TestThreeMasters:
                on m2 - delete one parent and create a child
             6. Test a few more parent-child combinations with three instances
             7. Resume replication
-            8. Check that the entries on both masters are the same and replication is working
+            8. Check that the entries on all suppliers are the same and replication is working
         :expectedresults:
             1. It should pass
             2. It should pass
@@ -968,9 +968,9 @@ class TestThreeMasters:
 
         pytest.xfail("Issue 49591 - work in progress")
 
-        M1 = topology_m3.ms["master1"]
-        M2 = topology_m3.ms["master2"]
-        M3 = topology_m3.ms["master3"]
+        M1 = topology_m3.ms["supplier1"]
+        M2 = topology_m3.ms["supplier2"]
+        M3 = topology_m3.ms["supplier3"]
         repl = ReplicationManager(SUFFIX)
 
         cont_list = []
@@ -1031,7 +1031,7 @@ class TestThreeMasters:
 
         conts_dns = {}
         for num in range(1, 4):
-            inst = topology_m3.ms["master{}".format(num)]
+            inst = topology_m3.ms["supplier{}".format(num)]
             conts_dns[inst.serverid] = []
             conts = nsContainers(inst, base_m3.dn)
             for cont in conts.list():

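The conflict tests above all follow the same pause/diverge/resume shape. A hedged sketch of that skeleton, assuming the lib389 helpers visible in the hunks (the function name is illustrative):

    from lib389.replica import ReplicationManager
    from lib389.idm.user import UserAccounts
    from lib389._constants import SUFFIX

    def provoke_conflict(topo_m2):
        M1 = topo_m2.ms["supplier1"]
        M2 = topo_m2.ms["supplier2"]

        topo_m2.pause_all_replicas()                  # let the suppliers diverge
        UserAccounts(M1, SUFFIX).create_test_user(uid=1000)
        UserAccounts(M2, SUFFIX).create_test_user(uid=1000)  # same entry, other side
        topo_m2.resume_all_replicas()                 # replay and resolve the conflict

        repl = ReplicationManager(SUFFIX)
        repl.test_replication(M1, M2)                 # both directions still converge
        repl.test_replication(M2, M1)
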
+ 4 - 4
dirsrvtests/tests/suites/replication/conftest.py

@@ -23,9 +23,9 @@ log = logging.getLogger(__name__)
 # Redefine some fixtures so we can use them with class scope
 @pytest.fixture(scope="class")
 def topology_m2(request):
-    """Create Replication Deployment with two masters"""
+    """Create Replication Deployment with two suppliers"""
 
-    topology = create_topology({ReplicaRole.MASTER: 2})
+    topology = create_topology({ReplicaRole.SUPPLIER: 2})
 
     def fin():
         if DEBUGGING:
@@ -39,9 +39,9 @@ def topology_m2(request):
 
 @pytest.fixture(scope="class")
 def topology_m3(request):
-    """Create Replication Deployment with three masters"""
+    """Create Replication Deployment with three suppliers"""
 
-    topology = create_topology({ReplicaRole.MASTER: 3})
+    topology = create_topology({ReplicaRole.SUPPLIER: 3})
 
     def fin():
         if DEBUGGING:

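The only visible change for test authors is the role constant and the key used to address instances. A minimal sketch, assuming the same create_topology API:

    from lib389.topologies import create_topology
    from lib389._constants import ReplicaRole

    # Instances are now keyed "supplierN" in topology.ms, not "masterN".
    topology = create_topology({ReplicaRole.SUPPLIER: 2})
    s1 = topology.ms["supplier1"]
    s2 = topology.ms["supplier2"]
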
+ 9 - 9
dirsrvtests/tests/suites/replication/encryption_cl5_test.py

@@ -30,18 +30,18 @@ log = logging.getLogger(__name__)
 
 @pytest.fixture(scope="module")
 def topology_with_tls(topology_m2):
-    """Enable TLS on all masters"""
+    """Enable TLS on all suppliers"""
 
     [i.enable_tls() for i in topology_m2]
 
     repl = ReplicationManager(DEFAULT_SUFFIX)
-    repl.test_replication(topology_m2.ms['master1'], topology_m2.ms['master2'])
+    repl.test_replication(topology_m2.ms['supplier1'], topology_m2.ms['supplier2'])
 
     return topology_m2
 
 
 def _enable_changelog_encryption(inst, encrypt_algorithm):
-    """Configure changelog encryption for master"""
+    """Configure changelog encryption for supplier"""
 
     dse_ldif = DSEldif(inst)
     log.info('Configuring changelog encryption:{} for: {}'.format(inst.serverid, encrypt_algorithm))
@@ -91,14 +91,14 @@ def test_algorithm_unhashed(topology_with_tls):
 
     :id: b7a37bf8-4b2e-4dbd-9891-70117d67558c
     :parametrized: yes
-    :setup: Replication with two masters and SSL configured.
-    :steps: 1. Enable changelog encrytion on master1
-            2. Add a user to master1/master2
+    :setup: Replication with two suppliers and SSL configured.
+    :steps: 1. Enable changelog encryption on supplier1
+            2. Add a user to supplier1/supplier2
             3. Run dbscan -f on m1 to check unhashed#user#password
                attribute is encrypted.
             4. Run dbscan -f on m2 to check unhashed#user#password
                attribute is in cleartext.
-            5. Modify password in master2/master1
+            5. Modify password in supplier2/supplier1
             6. Run dbscan -f on m1 to check unhashed#user#password
                attribute is encrypted.
             7. Run dbscan -f on m2 to check unhashed#user#password
@@ -113,8 +113,8 @@ def test_algorithm_unhashed(topology_with_tls):
             7. It should pass
     """
     encryption = 'AES'
-    m1 = topology_with_tls.ms['master1']
-    m2 = topology_with_tls.ms['master2']
+    m1 = topology_with_tls.ms['supplier1']
+    m2 = topology_with_tls.ms['supplier2']
     m1.config.set('nsslapd-unhashed-pw-switch', 'on')
     m2.config.set('nsslapd-unhashed-pw-switch', 'on')
     test_passw = 'm2Test199'

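The topology_with_tls fixture above amounts to the following, shown here as a standalone sketch using the same lib389 calls (the function name is illustrative):

    from lib389.replica import ReplicationManager
    from lib389._constants import DEFAULT_SUFFIX

    def enable_tls_everywhere(topology_m2):
        # Switch every supplier to TLS, then prove replication still flows.
        for inst in topology_m2:
            inst.enable_tls()
        repl = ReplicationManager(DEFAULT_SUFFIX)
        repl.test_replication(topology_m2.ms['supplier1'],
                              topology_m2.ms['supplier2'])
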
+ 6 - 6
dirsrvtests/tests/suites/replication/multiple_changelogs_test.py

@@ -54,7 +54,7 @@ def test_multiple_changelogs(topo):
     changelog.
 
     :id: eafcdb57-4ea2-4887-a0a8-9e4d295f4f4d
-    :setup: Master Instance, Consumer Instance
+    :setup: Supplier Instance, Consumer Instance
     :steps:
         1. Create a second suffix
         2. Enable replication for second backend
@@ -66,7 +66,7 @@ def test_multiple_changelogs(topo):
         2. Success
         3. Success
     """
-    supplier = topo.ms['master1']
+    supplier = topo.ms['supplier1']
     consumer = topo.cs['consumer1']
 
     # Create second suffix dc=second_backend on both replicas
@@ -79,7 +79,7 @@ def test_multiple_changelogs(topo):
 
     # Setup replication for second suffix
     repl = ReplicationManager(SECOND_SUFFIX)
-    repl.create_first_master(supplier)
+    repl.create_first_supplier(supplier)
     repl.join_consumer(supplier, consumer)
 
     # Test replication works for each backend
@@ -94,7 +94,7 @@ def test_multiple_changelogs_export_import(topo):
     """Test that we can export and import the replication changelog
 
     :id: b74fcaaf-a13f-4ee0-98f9-248b281f8700
-    :setup: Master Instance, Consumer Instance
+    :setup: Supplier Instance, Consumer Instance
     :steps:
         1. Create a second suffix
         2. Enable replication for second backend
@@ -110,7 +110,7 @@ def test_multiple_changelogs_export_import(topo):
         5. Success
     """
     SECOND_SUFFIX = 'dc=second_suffix'
-    supplier = topo.ms['master1']
+    supplier = topo.ms['supplier1']
     consumer = topo.cs['consumer1']
     supplier.config.set('nsslapd-errorlog-level', '0')
     # Create second suffix dc=second_backend on both replicas
@@ -127,7 +127,7 @@ def test_multiple_changelogs_export_import(topo):
     # Setup replication for second suffix
     try:
         repl = ReplicationManager(SECOND_SUFFIX)
-        repl.create_first_master(supplier)
+        repl.create_first_supplier(supplier)
         repl.join_consumer(supplier, consumer)
     except ldap.ALREADY_EXISTS:
         pass

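The renamed create_first_supplier call is the seed step for any new suffix. A sketch of the setup both tests above share, assuming the ReplicationManager API shown in the hunks (the helper name is illustrative):

    from lib389.replica import ReplicationManager

    SECOND_SUFFIX = 'dc=second_suffix'  # the suffix this test file uses

    def replicate_second_backend(supplier, consumer):
        # The first supplier seeds replication for the new suffix,
        # then the consumer is joined and initialized from it.
        repl = ReplicationManager(SECOND_SUFFIX)
        repl.create_first_supplier(supplier)
        repl.join_consumer(supplier, consumer)
        repl.test_replication(supplier, consumer)
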
+ 8 - 8
dirsrvtests/tests/suites/replication/regression_i2_test.py

@@ -43,9 +43,9 @@ def test_special_symbol_replica_agreement(topo_i2):
     :setup: two standalone instance
     :steps:
         1. Create and Enable Replication on standalone2 and role as consumer
-        2. Create and Enable Replication on standalone1 and role as master
+        2. Create and Enable Replication on standalone1 and role as supplier
         3. Create a Replication agreement starts with "cn=->..."
-        4. Perform an upgrade operation over the master
+        4. Perform an upgrade operation on the supplier
         5. Check if the agreement is still present or not.
     :expectedresults:
         1. It should be successful
@@ -55,11 +55,11 @@ def test_special_symbol_replica_agreement(topo_i2):
         5. It should be successful
     """
 
-    master = topo_i2.ins["standalone1"]
+    supplier = topo_i2.ins["standalone1"]
     consumer = topo_i2.ins["standalone2"]
     consumer.replica.enableReplication(suffix=DEFAULT_SUFFIX, role=ReplicaRole.CONSUMER, replicaId=CONSUMER_REPLICAID)
     repl = ReplicationManager(DEFAULT_SUFFIX)
-    repl.create_first_master(master)
+    repl.create_first_supplier(supplier)
 
     properties = {RA_NAME: '-\\3meTo_{}:{}'.format(consumer.host, str(consumer.port)),
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
@@ -67,16 +67,16 @@ def test_special_symbol_replica_agreement(topo_i2):
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
 
-    master.agreement.create(suffix=SUFFIX,
+    supplier.agreement.create(suffix=SUFFIX,
                             host=consumer.host,
                             port=consumer.port,
                             properties=properties)
 
-    master.agreement.init(SUFFIX, consumer.host, consumer.port)
+    supplier.agreement.init(SUFFIX, consumer.host, consumer.port)
 
-    replica_server = Replicas(master).get(DEFAULT_SUFFIX)
+    replica_server = Replicas(supplier).get(DEFAULT_SUFFIX)
 
-    master.upgrade('online')
+    supplier.upgrade('online')
 
     agmt = replica_server.get_agreements().list()[0]
 

+ 88 - 88
dirsrvtests/tests/suites/replication/regression_m2_test.py

@@ -133,9 +133,9 @@ def _remove_replication_data(ldif_file):
 
 @pytest.fixture(scope="function")
 def topo_with_sigkill(request):
-    """Create Replication Deployment with two masters"""
+    """Create Replication Deployment with two suppliers"""
 
-    topology = create_topology({ReplicaRole.MASTER: 2})
+    topology = create_topology({ReplicaRole.SUPPLIER: 2})
 
     def _kill_ns_slapd(inst):
         pid = str(pid_from_file(inst.ds_paths.pid_file))
@@ -161,7 +161,7 @@ def create_entry(topo_m2, request):
     """Add test entry using UserAccounts"""
 
     log.info('Adding a test entry user')
-    users = UserAccounts(topo_m2.ms["master1"], DEFAULT_SUFFIX)
+    users = UserAccounts(topo_m2.ms["supplier1"], DEFAULT_SUFFIX)
     tuser = users.ensure_state(properties=TEST_USER_PROPERTIES)
     return tuser
 
@@ -218,32 +218,32 @@ def test_double_delete(topo_m2, create_entry):
     """Check that double delete of the entry doesn't crash server
 
     :id: 3496c82d-636a-48c9-973c-2455b12164cc
-    :setup: Two masters replication setup, a test entry
+    :setup: Two suppliers replication setup, a test entry
     :steps:
-        1. Delete the entry on the first master
-        2. Delete the entry on the second master
+        1. Delete the entry on the first supplier
+        2. Delete the entry on the second supplier
         3. Check that server is alive
     :expectedresults:
-        1. Entry should be successfully deleted from first master
+        1. Entry should be successfully deleted from first supplier
         2. Entry should be successfully deleted from second supplier
         3. Server should be alive
     """
 
-    m1 = topo_m2.ms["master1"]
-    m2 = topo_m2.ms["master2"]
+    m1 = topo_m2.ms["supplier1"]
+    m2 = topo_m2.ms["supplier2"]
 
     repl = ReplicationManager(DEFAULT_SUFFIX)
-    repl.disable_to_master(m1, [m2])
-    repl.disable_to_master(m2, [m1])
+    repl.disable_to_supplier(m1, [m2])
+    repl.disable_to_supplier(m2, [m1])
 
-    log.info('Deleting entry {} from master1'.format(create_entry.dn))
-    topo_m2.ms["master1"].delete_s(create_entry.dn)
+    log.info('Deleting entry {} from supplier1'.format(create_entry.dn))
+    topo_m2.ms["supplier1"].delete_s(create_entry.dn)
 
-    log.info('Deleting entry {} from master2'.format(create_entry.dn))
-    topo_m2.ms["master2"].delete_s(create_entry.dn)
+    log.info('Deleting entry {} from supplier2'.format(create_entry.dn))
+    topo_m2.ms["supplier2"].delete_s(create_entry.dn)
 
-    repl.enable_to_master(m2, [m1])
-    repl.enable_to_master(m1, [m2])
+    repl.enable_to_supplier(m2, [m1])
+    repl.enable_to_supplier(m1, [m2])
 
     repl.test_replication(m1, m2)
     repl.test_replication(m2, m1)
@@ -254,7 +254,7 @@ def test_repl_modrdn(topo_m2):
     """Test that replicated MODRDN does not break replication
 
     :id: a3e17698-9eb4-41e0-b537-8724b9915fa6
-    :setup: Two masters replication setup
+    :setup: Two suppliers replication setup
     :steps:
         1. Add 3 test OrganizationalUnits A, B and C
         2. Add 1 test user under OU=A
@@ -263,7 +263,7 @@ def test_repl_modrdn(topo_m2):
         5. Apply modrdn to M1 - move test user from OU A -> C
         6. Apply modrdn on M2 - move test user from OU B -> C
         7. Start Replication
-        8. Check that there should be only one test entry under ou=C on both masters
+        8. Check that there should be only one test entry under ou=C on both suppliers
         9. Check that the replication is working fine both ways M1 <-> M2
     :expectedresults:
         1. This should pass
@@ -277,13 +277,13 @@ def test_repl_modrdn(topo_m2):
         9. This should pass
     """
 
-    master1 = topo_m2.ms["master1"]
-    master2 = topo_m2.ms["master2"]
+    supplier1 = topo_m2.ms["supplier1"]
+    supplier2 = topo_m2.ms["supplier2"]
 
     repl = ReplicationManager(DEFAULT_SUFFIX)
 
     log.info("Add test entries - Add 3 OUs and 2 same users under 2 different OUs")
-    OUs = OrganizationalUnits(master1, DEFAULT_SUFFIX)
+    OUs = OrganizationalUnits(supplier1, DEFAULT_SUFFIX)
     OU_A = OUs.create(properties={
         'ou': 'A',
         'description': 'A',
@@ -297,50 +297,50 @@ def test_repl_modrdn(topo_m2):
         'description': 'C',
     })
 
-    users = UserAccounts(master1, DEFAULT_SUFFIX, rdn='ou={}'.format(OU_A.rdn))
+    users = UserAccounts(supplier1, DEFAULT_SUFFIX, rdn='ou={}'.format(OU_A.rdn))
     tuser_A = users.create(properties=TEST_USER_PROPERTIES)
 
-    users = UserAccounts(master1, DEFAULT_SUFFIX, rdn='ou={}'.format(OU_B.rdn))
+    users = UserAccounts(supplier1, DEFAULT_SUFFIX, rdn='ou={}'.format(OU_B.rdn))
     tuser_B = users.create(properties=TEST_USER_PROPERTIES)
 
-    repl.test_replication(master1, master2)
-    repl.test_replication(master2, master1)
+    repl.test_replication(supplier1, supplier2)
+    repl.test_replication(supplier2, supplier1)
 
     log.info("Stop Replication")
     topo_m2.pause_all_replicas()
 
     log.info("Apply modrdn to M1 - move test user from OU A -> C")
-    master1.rename_s(tuser_A.dn, 'uid=testuser1', newsuperior=OU_C.dn, delold=1)
+    supplier1.rename_s(tuser_A.dn, 'uid=testuser1', newsuperior=OU_C.dn, delold=1)
 
     log.info("Apply modrdn on M2 - move test user from OU B -> C")
-    master2.rename_s(tuser_B.dn, 'uid=testuser1', newsuperior=OU_C.dn, delold=1)
+    supplier2.rename_s(tuser_B.dn, 'uid=testuser1', newsuperior=OU_C.dn, delold=1)
 
     log.info("Start Replication")
     topo_m2.resume_all_replicas()
 
     log.info("Wait for sometime for repl to resume")
-    repl.test_replication(master1, master2)
-    repl.test_replication(master2, master1)
+    repl.test_replication(supplier1, supplier2)
+    repl.test_replication(supplier2, supplier1)
 
-    log.info("Check that there should be only one test entry under ou=C on both masters")
-    users = UserAccounts(master1, DEFAULT_SUFFIX, rdn='ou={}'.format(OU_C.rdn))
+    log.info("Check that there should be only one test entry under ou=C on both suppliers")
+    users = UserAccounts(supplier1, DEFAULT_SUFFIX, rdn='ou={}'.format(OU_C.rdn))
     assert len(users.list()) == 1
 
-    users = UserAccounts(master2, DEFAULT_SUFFIX, rdn='ou={}'.format(OU_C.rdn))
+    users = UserAccounts(supplier2, DEFAULT_SUFFIX, rdn='ou={}'.format(OU_C.rdn))
     assert len(users.list()) == 1
 
     log.info("Check that the replication is working fine both ways, M1 <-> M2")
-    repl.test_replication(master1, master2)
-    repl.test_replication(master2, master1)
+    repl.test_replication(supplier1, supplier2)
+    repl.test_replication(supplier2, supplier1)
 
 
 def test_password_repl_error(topo_m2, create_entry):
     """Check that error about userpassword replication is properly logged
 
     :id: 714130ff-e4f0-4633-9def-c1f4b24abfef
-    :setup: Four masters replication setup, a test entry
+    :setup: Two suppliers replication setup, a test entry
     :steps:
-        1. Change userpassword on the first master
+        1. Change userpassword on the first supplier
         2. Restart the servers to flush the logs
         3. Check the error log for a replication error
     :expectedresults:
@@ -349,8 +349,8 @@ def test_password_repl_error(topo_m2, create_entry):
         3. There should be no replication errors in the error log
     """
 
-    m1 = topo_m2.ms["master1"]
-    m2 = topo_m2.ms["master2"]
+    m1 = topo_m2.ms["supplier1"]
+    m2 = topo_m2.ms["supplier2"]
     TEST_ENTRY_NEW_PASS = 'new_pass'
 
     log.info('Clean the error log')
@@ -359,7 +359,7 @@ def test_password_repl_error(topo_m2, create_entry):
     log.info('Set replication loglevel')
     m2.config.loglevel((ErrorLog.REPLICA,))
 
-    log.info('Modifying entry {} - change userpassword on master 1'.format(create_entry.dn))
+    log.info('Modifying entry {} - change userpassword on supplier 1'.format(create_entry.dn))
 
     create_entry.set('userpassword', TEST_ENTRY_NEW_PASS)
 
@@ -368,10 +368,10 @@ def test_password_repl_error(topo_m2, create_entry):
 
     log.info('Restart the servers to flush the logs')
     for num in range(1, 3):
-        topo_m2.ms["master{}".format(num)].restart()
+        topo_m2.ms["supplier{}".format(num)].restart()
 
     try:
-        log.info('Check that password works on master 2')
+        log.info('Check that password works on supplier 2')
         create_entry_m2 = UserAccount(m2, create_entry.dn)
         create_entry_m2.bind(TEST_ENTRY_NEW_PASS)
 
@@ -386,7 +386,7 @@ def test_invalid_agmt(topo_m2):
     """Test adding that an invalid agreement is properly rejected and does not crash the server
 
     :id: 6c3b2a7e-edcd-4327-a003-6bd878ff722b
-    :setup: Four masters replication setup
+    :setup: Two suppliers replication setup
     :steps:
         1. Add invalid agreement (nsds5ReplicaEnabled set to invalid value)
         2. Verify the server is still running
@@ -395,8 +395,8 @@ def test_invalid_agmt(topo_m2):
         2. Server should be still running
     """
 
-    m1 = topo_m2.ms["master1"]
-    m2 = topo_m2.ms["master2"]
+    m1 = topo_m2.ms["supplier1"]
+    m2 = topo_m2.ms["supplier2"]
 
     repl = ReplicationManager(DEFAULT_SUFFIX)
 
@@ -430,7 +430,7 @@ def test_fetch_bindDnGroup(topo_m2):
     """Check the bindDNGroup is fetched on first replication session
 
     :id: 5f1b1f59-6744-4260-b091-c82d22130025
-    :setup: 2 Master Instances
+    :setup: 2 Supplier Instances
     :steps:
         1. Create a replication bound user and group, but the user *not* member of the group
         2. Check that replication is working
@@ -457,9 +457,9 @@ def test_fetch_bindDnGroup(topo_m2):
     # Topology for suites are predefined in lib389/topologies.py.
 
     # If you need host, port or any other data about instance,
-    # Please, use the instance object attributes for that (for example, topo.ms["master1"].serverid)
-    M1 = topo_m2.ms['master1']
-    M2 = topo_m2.ms['master2']
+    # Please, use the instance object attributes for that (for example, topo.ms["supplier1"].serverid)
+    M1 = topo_m2.ms['supplier1']
+    M2 = topo_m2.ms['supplier2']
 
     # Enable replication log level. Not really necessary
     M1.modify_s('cn=config', [(ldap.MOD_REPLACE, 'nsslapd-errorlog-level', b'8192')])
@@ -579,7 +579,7 @@ def test_plugin_bind_dn_tracking_and_replication(topo_m2):
         access control and reconfiguring replication/repl agmt.
 
     :id: dd689d03-69b8-4bf9-a06e-2acd19d5e2c9
-    :setup: 2 master topology
+    :setup: 2 supplier topology
     :steps:
         1. Turn on plugin binddn tracking
         2. Add some users
@@ -594,7 +594,7 @@ def test_plugin_bind_dn_tracking_and_replication(topo_m2):
         5. Success
     """
 
-    m1 = topo_m2.ms["master1"]
+    m1 = topo_m2.ms["supplier1"]
 
     # Turn on bind dn tracking
     m1.config.set('nsslapd-plugin-binddn-tracking', 'on')
@@ -632,7 +632,7 @@ def test_moving_entry_make_online_init_fail(topo_m2):
     Moving an entry could make the online init fail
 
     :id: e3895be7-884a-4e9f-80e3-24e9a5167c9e
-    :setup: Two masters replication setup
+    :setup: Two suppliers replication setup
     :steps:
          1. Generate DIT_0
          2. Generate password policy for DIT_0
@@ -643,7 +643,7 @@ def test_moving_entry_make_online_init_fail(topo_m2):
          7. Move 'ou=OU0,dc=example,dc=com' to DIT_1
          8. Move idx % 2 == 1 users to 'ou=OU0,ou=OU0,ou=OU1,dc=example,dc=com'
          9. Init replicas
-         10. Number of entries should match on both masters
+         10. Number of entries should match on both suppliers
 
     :expectedresults:
          1. Success
@@ -658,8 +658,8 @@ def test_moving_entry_make_online_init_fail(topo_m2):
          10. Success
     """
 
-    M1 = topo_m2.ms["master1"]
-    M2 = topo_m2.ms["master2"]
+    M1 = topo_m2.ms["supplier1"]
+    M2 = topo_m2.ms["supplier2"]
 
     log.info("Generating DIT_0")
     idx = 0
@@ -743,21 +743,21 @@ def get_keepalive_entries(instance, replica):
 
 
 def verify_keepalive_entries(topo, expected):
-    # Check that keep alive entries exists (or not exists) for every masters on every masters
-    # Note: The testing method is quite basic: counting that there is one keepalive entry per master.
+    # Check that keepalive entries exist (or do not exist) for every supplier on every supplier
+    # Note: The testing method is quite basic: counting that there is one keepalive entry per supplier.
     # that is ok for simple test cases like test_online_init_should_create_keepalive_entries but
-    # not for the general case as keep alive associated with no more existing master may exists
-    # (for example after: db2ldif / demote a master / ldif2db / init other masters)
+    # not for the general case, as keepalive entries associated with a no longer existing supplier may exist
+    # (for example after: db2ldif / demote a supplier / ldif2db / init other suppliers)
     # ==> if the function is somehow pushed in lib389, a check better than simply counting the entries
     # should be done.
-    for masterId in topo.ms:
-        master = topo.ms[masterId]
-        for replica in Replicas(master).list():
-            if (replica.get_role() != ReplicaRole.MASTER):
+    for supplierId in topo.ms:
+        supplier = topo.ms[supplierId]
+        for replica in Replicas(supplier).list():
+            if (replica.get_role() != ReplicaRole.SUPPLIER):
                 continue
-            replica_info = f'master: {masterId} RID: {replica.get_rid()} suffix: {replica.get_suffix()}'
+            replica_info = f'supplier: {supplierId} RID: {replica.get_rid()} suffix: {replica.get_suffix()}'
             log.debug(f'Checking keepAliveEntries on {replica_info}')
+            keepaliveEntries = get_keepalive_entries(supplier, replica)
+            keepaliveEntries = get_keepalive_entries(supplier, replica);
             expectedCount = len(topo.ms) if expected else 0
             foundCount = len(keepaliveEntries)
             if (foundCount == expectedCount):
@@ -769,28 +769,28 @@ def verify_keepalive_entries(topo, expected):
 
 
 def test_online_init_should_create_keepalive_entries(topo_m2):
-    """Check that keep alive entries are created when initializinf a master from another one
+    """Check that keep alive entries are created when initializinf a supplier from another one
 
     :id: d5940e71-d18a-4b71-aaf7-b9185361fffe
-    :setup: Two masters replication setup
+    :setup: Two suppliers replication setup
     :steps:
         1. Generate ldif without replication data
-        2  Init both masters from that ldif
+        2. Init both suppliers from that ldif
         3. Check that keepalive entries do not exist
-        4  Perform on line init of master2 from master1
+        4. Perform online init of supplier2 from supplier1
         5. Check that keepalive entries exist
     :expectedresults:
         1. No error while generating ldif
         2. No error while importing the ldif file
-        3. No keepalive entrie should exists on any masters
-        4. No error while initializing master2
-        5. All keepalive entries should exist on every masters
+        3. No keepalive entries should exist on any supplier
+        4. No error while initializing supplier2
+        5. All keepalive entries should exist on every supplier
 
     """
 
     repl = ReplicationManager(DEFAULT_SUFFIX)
-    m1 = topo_m2.ms["master1"]
-    m2 = topo_m2.ms["master2"]
+    m1 = topo_m2.ms["supplier1"]
+    m2 = topo_m2.ms["supplier2"]
     # Step 1: Generate ldif without replication data
     m1.stop()
     m2.stop()
@@ -801,32 +801,32 @@ def test_online_init_should_create_keepalive_entries(topo_m2):
     # Remove replication metadata that are still in the ldif
     _remove_replication_data(ldif_file)
 
-    # Step 2: Init both masters from that ldif
+    # Step 2: Init both suppliers from that ldif
     m1.ldif2db(DEFAULT_BENAME, None, None, None, ldif_file)
     m2.ldif2db(DEFAULT_BENAME, None, None, None, ldif_file)
     m1.start()
     m2.start()
 
     """ Replica state is now as if CLI setup has been done using:
-        dsconf master1 replication enable --suffix "${SUFFIX}" --role master
-        dsconf master2 replication enable --suffix "${SUFFIX}" --role master
-        dsconf master1 replication create-manager --name "${REPLICATION_MANAGER_NAME}" --passwd "${REPLICATION_MANAGER_PASSWORD}"
-        dsconf master2 replication create-manager --name "${REPLICATION_MANAGER_NAME}" --passwd "${REPLICATION_MANAGER_PASSWORD}"
-        dsconf master1 repl-agmt create --suffix "${SUFFIX}"
-        dsconf master2 repl-agmt create --suffix "${SUFFIX}"
+        dsconf supplier1 replication enable --suffix "${SUFFIX}" --role supplier
+        dsconf supplier2 replication enable --suffix "${SUFFIX}" --role supplier
+        dsconf supplier1 replication create-manager --name "${REPLICATION_MANAGER_NAME}" --passwd "${REPLICATION_MANAGER_PASSWORD}"
+        dsconf supplier2 replication create-manager --name "${REPLICATION_MANAGER_NAME}" --passwd "${REPLICATION_MANAGER_PASSWORD}"
+        dsconf supplier1 repl-agmt create --suffix "${SUFFIX}"
+        dsconf supplier2 repl-agmt create --suffix "${SUFFIX}"
     """
 
-    # Step 3: No keepalive entrie should exists on any masters
+    # Step 3: No keepalive entries should exist on any supplier
     verify_keepalive_entries(topo_m2, False)
 
-    # Step 4: Perform on line init of master2 from master1
+    # Step 4: Perform online init of supplier2 from supplier1
     agmt = Agreements(m1).list()[0]
     agmt.begin_reinit()
     (done, error) = agmt.wait_reinit()
     assert done is True
     assert error is False
 
-    # Step 5: All keepalive entries should exists on every masters
+    # Step 5: All keepalive entries should exist on every supplier
     #  Verify the keep alive entry once replication is in sync
     # (that is the step that fails when bug is not fixed)
     repl.wait_for_ruv(m2,m1)
@@ -840,7 +840,7 @@ def test_online_reinit_may_hang(topo_with_sigkill):
        entry of the DB is RUV entry instead of the suffix
 
     :id: cded6afa-66c0-4c65-9651-993ba3f7a49c
-    :setup: 2 Master Instances
+    :setup: 2 Supplier Instances
     :steps:
         1. Export the database
         2. Move RUV entry to the top in the ldif file
@@ -852,10 +852,10 @@ def test_online_reinit_may_hang(topo_with_sigkill):
         3. Import should be successful
         4. Server should not hang and consume 100% CPU
     """
-    M1 = topo_with_sigkill.ms["master1"]
-    M2 = topo_with_sigkill.ms["master2"]
+    M1 = topo_with_sigkill.ms["supplier1"]
+    M2 = topo_with_sigkill.ms["supplier2"]
     M1.stop()
-    ldif_file = '%s/master1.ldif' % M1.get_ldif_dir()
+    ldif_file = '%s/supplier1.ldif' % M1.get_ldif_dir()
     M1.db2ldif(bename=DEFAULT_BENAME, suffixes=[DEFAULT_SUFFIX],
                excludeSuffixes=None, repl_data=True,
                outputfile=ldif_file, encrypt=False)

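Several of the tests above drive the same online-init primitive. A minimal sketch, assuming the Agreements API used in test_online_init_should_create_keepalive_entries:

    from lib389.agreement import Agreements

    def reinit_from_first_agreement(supplier):
        # Total (online) init over the supplier's first agreement.
        agmt = Agreements(supplier).list()[0]
        agmt.begin_reinit()
        done, error = agmt.wait_reinit()   # poll until the init finishes
        assert done is True
        assert error is False
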
+ 35 - 35
dirsrvtests/tests/suites/replication/regression_m2c2_test.py

@@ -49,37 +49,37 @@ def test_ruv_url_not_added_if_different_uuid(topo_m2c2):
     """Check that RUV url is not updated if RUV generation uuid are different
 
     :id: 7cc30a4e-0ffd-4758-8f00-e500279af344
-    :setup: Two masters + two consumers replication setup
+    :setup: Two suppliers + two consumers replication setup
     :steps:
         1. Generate ldif without replication data
-        2. Init both masters from that ldif
+        2. Init both suppliers from that ldif
              (to clear the ruvs and generate different generation uuids)
-        3. Perform on line init from master1 to consumer1
-               and from master2 to consumer2
-        4. Perform update on both masters
+        3. Perform online init from supplier1 to consumer1
+               and from supplier2 to consumer2
+        4. Perform update on both suppliers
         5. Check that c1 RUV does not contain a URL towards m2
         6. Check that c2 RUV does contain a URL towards m2
-        7. Perform on line init from master1 to master2
-        8. Perform update on master2
+        7. Perform online init from supplier1 to supplier2
+        8. Perform update on supplier2
         9. Check that c1 RUV does contain a URL towards m2
     :expectedresults:
         1. No error while generating ldif
         2. No error while importing the ldif file
         3. No error and Initialization done.
         4. No error
-        5. master2 replicaid should not be in the consumer1 RUV
-        6. master2 replicaid should be in the consumer2 RUV
+        5. supplier2 replicaid should not be in the consumer1 RUV
+        6. supplier2 replicaid should be in the consumer2 RUV
         7. No error and Initialization done.
         8. No error
-        9. master2 replicaid should be in the consumer1 RUV
+        9. supplier2 replicaid should be in the consumer1 RUV
 
     """
 
     # Variables initialization
     repl = ReplicationManager(DEFAULT_SUFFIX)
 
-    m1 = topo_m2c2.ms["master1"]
-    m2 = topo_m2c2.ms["master2"]
+    m1 = topo_m2c2.ms["supplier1"]
+    m2 = topo_m2c2.ms["supplier2"]
     c1 = topo_m2c2.cs["consumer1"]
     c2 = topo_m2c2.cs["consumer2"]
 
@@ -110,14 +110,14 @@ def test_ruv_url_not_added_if_different_uuid(topo_m2c2):
     # Remove replication metadata that are still in the ldif
     # _remove_replication_data(ldif_file)
 
-    # Step 2: Init both masters from that ldif
+    # Step 2: Init both suppliers from that ldif
     m1.ldif2db(DEFAULT_BENAME, None, None, None, ldif_file)
     m2.ldif2db(DEFAULT_BENAME, None, None, None, ldif_file)
     m1.start()
     m2.start()
 
-    # Step 3: Perform on line init from master1 to consumer1
-    #          and from master2 to consumer2
+    # Step 3: Perform online init from supplier1 to consumer1
+    #          and from supplier2 to consumer2
     m1_c1.begin_reinit()
     m2_c2.begin_reinit()
     (done, error) = m1_c1.wait_reinit()
@@ -127,7 +127,7 @@ def test_ruv_url_not_added_if_different_uuid(topo_m2c2):
     assert done is True
     assert error is False
 
-    # Step 4: Perform update on both masters
+    # Step 4: Perform update on both suppliers
     repl.test_replication(m1, c1)
     repl.test_replication(m2, c2)
 
@@ -153,13 +153,13 @@ def test_ruv_url_not_added_if_different_uuid(topo_m2c2):
     else:
         log.debug(f"URL for RID {replica_m2.get_rid()} in RUV is {url}")
 
-    # Step 7: Perform on line init from master1 to master2
+    # Step 7: Perform online init from supplier1 to supplier2
     m1_m2.begin_reinit()
     (done, error) = m1_m2.wait_reinit()
     assert done is True
     assert error is False
 
-    # Step 8: Perform update on master2
+    # Step 8: Perform update on supplier2
     repl.test_replication(m2, c1)
 
     # Step 9: Check that c1 RUV does contain a URL towards m2
@@ -177,19 +177,19 @@ def test_csngen_state_not_updated_if_different_uuid(topo_m2c2):
     """Check that csngen remote offset is not updated if RUV generation uuid are different
 
     :id: 77694b8e-22ae-11eb-89b2-482ae39447e5
-    :setup: Two masters + two consumers replication setup
+    :setup: Two suppliers + two consumers replication setup
     :steps:
         1. Disable m1<->m2 agreement to avoid propagate timeSkew
         2. Generate ldif without replication data
-        3. Increase time skew on master2
-        4. Init both masters from that ldif
+        3. Increase time skew on supplier2
+        4. Init both suppliers from that ldif
              (to clear the ruvs and generate different generation uuids)
-        5. Perform on line init from master1 to consumer1 and master2 to consumer2
-        6. Perform update on both masters
+        5. Perform online init from supplier1 to consumer1 and supplier2 to consumer2
+        6. Perform update on both suppliers
         7: Check that c1 has no time skew
         8: Check that c2 has time skew
-        9. Init master2 from master1
-        10. Perform update on master2
+        9. Init supplier2 from supplier1
+        10. Perform update on supplier2
         11. Check that c1 has time skew
     :expectedresults:
         1. No error
@@ -209,8 +209,8 @@ def test_csngen_state_not_updated_if_different_uuid(topo_m2c2):
     # Variables initialization
     repl = ReplicationManager(DEFAULT_SUFFIX)
 
-    m1 = topo_m2c2.ms["master1"]
-    m2 = topo_m2c2.ms["master2"]
+    m1 = topo_m2c2.ms["supplier1"]
+    m2 = topo_m2c2.ms["supplier2"]
     c1 = topo_m2c2.cs["consumer1"]
     c2 = topo_m2c2.cs["consumer2"]
 
@@ -245,9 +245,9 @@ def test_csngen_state_not_updated_if_different_uuid(topo_m2c2):
     # Remove replication metadata that are still in the ldif
     # _remove_replication_data(ldif_file)
 
-    # Step 3: Increase time skew on master2
+    # Step 3: Increase time skew on supplier2
     timeSkew = 6*3600
-    # We can modify master2 time skew
+    # We can modify supplier2 time skew
     # But the time skew on the consumer may be smaller
     # depending on when the csngen generation time is updated
     # and when the first csn gets replicated.
@@ -258,14 +258,14 @@ def test_csngen_state_not_updated_if_different_uuid(topo_m2c2):
     timeSkewMargin = 300
     DSEldif(m2)._increaseTimeSkew(DEFAULT_SUFFIX, timeSkew+timeSkewMargin)
 
-    # Step 4: Init both masters from that ldif
+    # Step 4: Init both suppliers from that ldif
     m1.ldif2db(DEFAULT_BENAME, None, None, None, ldif_file)
     m2.ldif2db(DEFAULT_BENAME, None, None, None, ldif_file)
     m1.start()
     m2.start()
 
-    # Step 5: Perform on line init from master1 to consumer1
-    #          and from master2 to consumer2
+    # Step 5: Perform online init from supplier1 to consumer1
+    #          and from supplier2 to consumer2
     m1_c1.begin_reinit()
     m2_c2.begin_reinit()
     (done, error) = m1_c1.wait_reinit()
@@ -275,7 +275,7 @@ def test_csngen_state_not_updated_if_different_uuid(topo_m2c2):
     assert done is True
     assert error is False
 
-    # Step 6: Perform update on both masters
+    # Step 6: Perform update on both suppliers
     repl.test_replication(m1, c1)
     repl.test_replication(m2, c2)
 
@@ -301,7 +301,7 @@ def test_csngen_state_not_updated_if_different_uuid(topo_m2c2):
         assert False
     c2.start()
 
-    # Step 9: Perform on line init from master1 to master2
+    # Step 9: Perform online init from supplier1 to supplier2
     m1_c1.pause()
     m1_m2.resume()
     m1_m2.begin_reinit()
@@ -309,7 +309,7 @@ def test_csngen_state_not_updated_if_different_uuid(topo_m2c2):
     assert done is True
     assert error is False
 
-    # Step 10: Perform update on master2
+    # Step 10: Perform update on supplier2
     repl.test_replication(m2, c1)
 
     # Step 11: Check that c1 has time skew

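The time-skew setup in test_csngen_state_not_updated_if_different_uuid is an offline dse.ldif edit. A hedged sketch, assuming the private DSEldif helper the test calls (applied only while the instance is stopped; the function name is illustrative):

    from lib389.dseldif import DSEldif
    from lib389._constants import DEFAULT_SUFFIX

    def skew_supplier_clock(inst, seconds):
        # Bump the stored CSN generator state so the supplier behaves
        # as if its clock were `seconds` ahead.
        inst.stop()
        DSEldif(inst)._increaseTimeSkew(DEFAULT_SUFFIX, seconds)
        inst.start()
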
+ 9 - 9
dirsrvtests/tests/suites/replication/regression_m3_test.py

@@ -43,11 +43,11 @@ def test_cleanallruv_repl(topo_m3):
     in deleted replica
 
     :id: 46faba9a-897e-45b8-98dc-aec7fa8cec9a
-    :setup: 3 Masters
+    :setup: 3 Suppliers
     :steps:
-        1. Configure error log level to 8192 in all masters
+        1. Configure error log level to 8192 in all suppliers
         2. Modify nsslapd-changelogmaxage=30 and nsslapd-changelogtrim-interval=5 for M1 and M2
-        3. Add test users to 3 masters
+        3. Add test users to 3 suppliers
         4. Launch CleanAllRUV with the force option
         5. Check the users after CleanAllRUV; because of changelog trimming, it will affect the CLs
     :expectedresults:
@@ -58,15 +58,15 @@ def test_cleanallruv_repl(topo_m3):
         5. Users should be present according to the changelog trimming effect
     """
 
-    M1 = topo_m3.ms["master1"]
-    M2 = topo_m3.ms["master2"]
-    M3 = topo_m3.ms["master3"]
+    M1 = topo_m3.ms["supplier1"]
+    M2 = topo_m3.ms["supplier2"]
+    M3 = topo_m3.ms["supplier3"]
 
-    log.info("Change the error log levels for all masters")
+    log.info("Change the error log levels for all suppliers")
     for s in (M1, M2, M3):
         s.config.replace('nsslapd-errorlog-level', "8192")
 
-    log.info("Get the replication agreements for all 3 masters")
+    log.info("Get the replication agreements for all 3 suppliers")
     m1_m2 = M1.agreement.list(suffix=SUFFIX, consumer_host=M2.host, consumer_port=M2.port)
     m1_m3 = M1.agreement.list(suffix=SUFFIX, consumer_host=M3.host, consumer_port=M3.port)
     m3_m1 = M3.agreement.list(suffix=SUFFIX, consumer_host=M1.host, consumer_port=M1.port)
@@ -94,7 +94,7 @@ def test_cleanallruv_repl(topo_m3):
         changelog_m1.set_max_age(MAXAGE_STR)
         changelog_m1.set_trim_interval(TRIMINTERVAL_STR)
 
-    log.info("Add test users to 3 masters")
+    log.info("Add test users to 3 suppliers")
     users_m1 = UserAccounts(M1, DEFAULT_SUFFIX)
     users_m2 = UserAccounts(M2, DEFAULT_SUFFIX)
     users_m3 = UserAccounts(M3, DEFAULT_SUFFIX)

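Step 2 of the test above configures changelog trimming. A sketch of that configuration, assuming the Changelog5 helper that lib389 replication tests use; the set_max_age/set_trim_interval calls appear verbatim in the hunk:

    from lib389.replica import Changelog5

    MAXAGE_STR = '30'        # nsslapd-changelogmaxage=30
    TRIMINTERVAL_STR = '5'   # nsslapd-changelogtrim-interval=5

    def configure_trimming(supplier):
        cl = Changelog5(supplier)
        cl.set_max_age(MAXAGE_STR)
        cl.set_trim_interval(TRIMINTERVAL_STR)
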
+ 17 - 17
dirsrvtests/tests/suites/replication/repl_agmt_bootstrap_test.py

@@ -28,12 +28,12 @@ def test_repl_agmt_bootstrap_credentials(topo):
 
     :id: 38c8095c-d958-415a-b602-74854b7882b3
     :customerscenario: True
-    :setup: 2 Master Instances
+    :setup: 2 Supplier Instances
     :steps:
         1.  Change the bind dn group member passwords
         2.  Verify replication is not working
-        3.  Create a new repl manager on master 2 for bootstrapping
-        4.  Add bootstrap credentials to agmt on master 1
+        3.  Create a new repl manager on supplier 2 for bootstrapping
+        4.  Add bootstrap credentials to agmt on supplier 1
         5.  Verify replication is now working with bootstrap creds
         6.  Trigger new repl session and default credentials are used first
     :expectedresults:
@@ -46,13 +46,13 @@ def test_repl_agmt_bootstrap_credentials(topo):
     """
 
     # Gather all of our objects for the test
-    m1 = topo.ms["master1"]
-    m2 = topo.ms["master2"]
-    master1_replica = Replicas(m1).get(DEFAULT_SUFFIX)
-    master2_replica = Replicas(m2).get(DEFAULT_SUFFIX)
-    master2_users = UserAccounts(m2, DEFAULT_SUFFIX)
-    m1_agmt = master1_replica.get_agreements().list()[0]
-    num_of_original_users = len(master2_users.list())
+    m1 = topo.ms["supplier1"]
+    m2 = topo.ms["supplier2"]
+    supplier1_replica = Replicas(m1).get(DEFAULT_SUFFIX)
+    supplier2_replica = Replicas(m2).get(DEFAULT_SUFFIX)
+    supplier2_users = UserAccounts(m2, DEFAULT_SUFFIX)
+    m1_agmt = supplier1_replica.get_agreements().list()[0]
+    num_of_original_users = len(supplier2_users.list())
 
     # Change the member's passwords which should break replication
     bind_group = Group(m2, dn=BIND_GROUP_DN)
@@ -68,7 +68,7 @@ def test_repl_agmt_bootstrap_credentials(topo):
     users = UserAccounts(m1, DEFAULT_SUFFIX)
     test_user = users.ensure_state(properties=TEST_USER_PROPERTIES)
     time.sleep(3)
-    assert len(master2_users.list()) == num_of_original_users
+    assert len(supplier2_users.list()) == num_of_original_users
 
     # Create a repl manager on replica
     repl_mgr = BootstrapReplicationManager(m2, dn=BOOTSTRAP_MGR_DN)
@@ -79,12 +79,12 @@ def test_repl_agmt_bootstrap_credentials(topo):
     }
     repl_mgr.create(properties=mgr_properties)
 
-    # Update master 2 config
-    master2_replica.remove_all('nsDS5ReplicaBindDNGroup')
-    master2_replica.remove_all('nsDS5ReplicaBindDnGroupCheckInterval')
-    master2_replica.replace('nsDS5ReplicaBindDN', BOOTSTRAP_MGR_DN)
+    # Update supplier 2 config
+    supplier2_replica.remove_all('nsDS5ReplicaBindDNGroup')
+    supplier2_replica.remove_all('nsDS5ReplicaBindDnGroupCheckInterval')
+    supplier2_replica.replace('nsDS5ReplicaBindDN', BOOTSTRAP_MGR_DN)
 
-    # Add bootstrap credentials to master1 agmt, and restart agmt
+    # Add bootstrap credentials to supplier1 agmt, and restart agmt
     m1_agmt.replace('nsds5ReplicaBootstrapTransportInfo', 'LDAP')
     m1_agmt.replace('nsds5ReplicaBootstrapBindMethod', 'SIMPLE')
     m1_agmt.replace('nsds5ReplicaBootstrapCredentials', BOOTSTRAP_MGR_PWD)
@@ -94,7 +94,7 @@ def test_repl_agmt_bootstrap_credentials(topo):
 
     # Verify replication is working.  The user should have been replicated
     time.sleep(3)
-    assert len(master2_users.list()) > num_of_original_users
+    assert len(supplier2_users.list()) > num_of_original_users
 
     # Finally check if the default credentials are used on the next repl
     # session.  Clear out the logs, and disable log buffering.  Then

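The bootstrap-credential flow in the hunk above reduces to one pattern; a sketch built from the agreement attributes shown there (the nsds5ReplicaBootstrapBindDN line is cut off by the hunk, so its inclusion here is an assumption):

# Fallback credentials let an agreement keep replicating when the default
# replication manager can no longer bind (e.g. its password was changed).
def add_bootstrap_creds(agmt, bind_dn, bind_pw):
    agmt.replace('nsds5ReplicaBootstrapTransportInfo', 'LDAP')
    agmt.replace('nsds5ReplicaBootstrapBindMethod', 'SIMPLE')
    agmt.replace('nsds5ReplicaBootstrapBindDN', bind_dn)  # assumed: not visible in the hunk
    agmt.replace('nsds5ReplicaBootstrapCredentials', bind_pw)

Per the test's final step, the default credentials are still tried first on each new session; the bootstrap values are only used when that bind fails.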
+ 13 - 13
dirsrvtests/tests/suites/replication/ruvstore_test.py

@@ -54,13 +54,13 @@ class MyLDIF(LDIFParser):
 def _perform_ldap_operations(topo):
     """Add a test user, modify description, modrdn user and delete it"""
 
-    users = UserAccounts(topo.ms['master1'], DEFAULT_SUFFIX)
-    log.info('Adding user to master1')
+    users = UserAccounts(topo.ms['supplier1'], DEFAULT_SUFFIX)
+    log.info('Adding user to supplier1')
     tuser = users.create(properties=USER_PROPERTIES)
     tuser.replace('description', 'newdesc')
     log.info('Modify RDN of user: {}'.format(tuser.dn))
     try:
-        topo.ms['master1'].modrdn_s(tuser.dn, 'uid={}'.format(NEW_RDN_NAME), 0)
+        topo.ms['supplier1'].modrdn_s(tuser.dn, 'uid={}'.format(NEW_RDN_NAME), 0)
     except ldap.LDAPError as e:
         log.fatal('Failed to modrdn entry: {}'.format(tuser.dn))
         raise e
@@ -73,7 +73,7 @@ def _compare_memoryruv_and_databaseruv(topo, operation_type):
     """Compare the memoryruv and databaseruv for ldap operations"""
 
     log.info('Checking memory ruv for ldap: {} operation'.format(operation_type))
-    replicas = Replicas(topo.ms['master1'])
+    replicas = Replicas(topo.ms['supplier1'])
     replica = replicas.list()[0]
     memory_ruv = replica.get_attr_val_utf8('nsds50ruv')
 
@@ -87,7 +87,7 @@ def test_ruv_entry_backup(topo):
     """Check if db2ldif stores the RUV details in the backup file
 
     :id: cbe2c473-8578-4caf-ac0a-841140e41e66
-    :setup: Replication with two masters.
+    :setup: Replication with two suppliers.
     :steps: 1. Add user to server.
             2. Perform ldap modify, modrdn and delete operations.
             3. Stop the server and backup the database using db2ldif task.
@@ -102,13 +102,13 @@ def test_ruv_entry_backup(topo):
     log.info('LDAP operations add, modify, modrdn and delete')
     _perform_ldap_operations(topo)
 
-    output_file = os.path.join(topo.ms['master1'].get_ldif_dir(), 'master1.ldif')
+    output_file = os.path.join(topo.ms['supplier1'].get_ldif_dir(), 'supplier1.ldif')
     log.info('Stopping the server instance to run db2ldif task to create backup file')
-    topo.ms['master1'].stop()
-    topo.ms['master1'].db2ldif(bename=DEFAULT_BENAME, suffixes=[DEFAULT_SUFFIX], excludeSuffixes=[],
+    topo.ms['supplier1'].stop()
+    topo.ms['supplier1'].db2ldif(bename=DEFAULT_BENAME, suffixes=[DEFAULT_SUFFIX], excludeSuffixes=[],
                                encrypt=False, repl_data=True, outputfile=output_file)
     log.info('Starting the server after backup')
-    topo.ms['master1'].start()
+    topo.ms['supplier1'].start()
 
     log.info('Checking if backup file contains RUV and required attributes')
     with open(output_file, 'r') as ldif_file:
@@ -121,7 +121,7 @@ def test_memoryruv_sync_with_databaseruv(topo):
     """Check if memory ruv and database ruv are synced
 
     :id: 5f38ac5f-6353-460d-bf60-49cafffda5b3
-    :setup: Replication with two masters.
+    :setup: Replication with two suppliers.
     :steps: 1. Add user to server and compare memory ruv and database ruv.
             2. Modify description of user and compare memory ruv and database ruv.
             3. Modrdn of user and compare memory ruv and database ruv.
@@ -133,8 +133,8 @@ def test_memoryruv_sync_with_databaseruv(topo):
             4. For delete operation, the memory ruv and database ruv should be the same.
     """
 
-    log.info('Adding user: {} to master1'.format(TEST_ENTRY_NAME))
-    users = UserAccounts(topo.ms['master1'], DEFAULT_SUFFIX)
+    log.info('Adding user: {} to supplier1'.format(TEST_ENTRY_NAME))
+    users = UserAccounts(topo.ms['supplier1'], DEFAULT_SUFFIX)
     tuser = users.create(properties=USER_PROPERTIES)
     _compare_memoryruv_and_databaseruv(topo, 'add')
 
@@ -144,7 +144,7 @@ def test_memoryruv_sync_with_databaseruv(topo):
 
     log.info('Modify RDN of user: {}'.format(tuser.dn))
     try:
-        topo.ms['master1'].modrdn_s(tuser.dn, 'uid={}'.format(NEW_RDN_NAME), 0)
+        topo.ms['supplier1'].modrdn_s(tuser.dn, 'uid={}'.format(NEW_RDN_NAME), 0)
     except ldap.LDAPError as e:
         log.fatal('Failed to modrdn entry: {}'.format(tuser.dn))
         raise e

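The memory-versus-database RUV comparison driving this test can be spelled out as follows; a sketch, assuming REPLICA_RUV_FILTER in lib389._constants matches the tombstone entry that persists the database RUV under the suffix:

import ldap
from lib389.replica import Replicas
from lib389._constants import DEFAULT_SUFFIX, REPLICA_RUV_FILTER  # filter constant is an assumption

def fetch_ruvs(supplier):
    """Return (memory RUV, database RUV) for the default suffix."""
    replica = Replicas(supplier).get(DEFAULT_SUFFIX)
    memory_ruv = replica.get_attr_val_utf8('nsds50ruv')
    # The database RUV is persisted in a special tombstone entry under the suffix
    entry = supplier.search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE,
                              REPLICA_RUV_FILTER, ['nsds50ruv'])[0]
    database_ruv = entry.getValues('nsds50ruv')
    return memory_ruv, database_ruv

test_memoryruv_sync_with_databaseruv then compares these two views after each add, modify, modrdn, and delete operation.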
+ 25 - 25
dirsrvtests/tests/suites/replication/series_of_repl_bugs_test.py

@@ -25,7 +25,7 @@ pytestmark = pytest.mark.tier1
 @pytest.fixture(scope="function")
 def _delete_after(request, topo_m2):
     def last():
-        m1 = topo_m2.ms["master1"]
+        m1 = topo_m2.ms["supplier1"]
         if UserAccounts(m1, DEFAULT_SUFFIX, rdn=None).list():
             for user in UserAccounts(m1, DEFAULT_SUFFIX, rdn=None).list():
                 user.delete()
@@ -38,7 +38,7 @@ def test_deletions_are_not_replicated(topo_m2):
     """usn + mmr = deletions are not replicated
 
     :id: aa4f67ce-a64c-11ea-a6fd-8c16451d917b
-    :setup: MMR with 2 masters
+    :setup: MMR with 2 suppliers
     :steps:
         1. Enable USN plugin on both servers
         2. Enable USN plugin on Supplier 2
@@ -58,8 +58,8 @@ def test_deletions_are_not_replicated(topo_m2):
         7. Should succeed
         8. Should succeed
     """
-    m1 = topo_m2.ms["master1"]
-    m2 = topo_m2.ms["master2"]
+    m1 = topo_m2.ms["supplier1"]
+    m2 = topo_m2.ms["supplier2"]
     # Enable USN plugin on both servers
     usn1 = USNPlugin(m1)
     usn2 = USNPlugin(m2)
@@ -91,7 +91,7 @@ def test_error_20(topo_m2, _delete_after):
     """DS returns error 20 when replacing values of a multi-valued attribute (only when replication is enabled)
 
     :id: a55bccc6-a64c-11ea-bac8-8c16451d917b
-    :setup: MMR with 2 masters
+    :setup: MMR with 2 suppliers
     :steps:
         1. Add user
         2. Change multivalue attribute
@@ -99,8 +99,8 @@ def test_error_20(topo_m2, _delete_after):
         1. Should succeed
         2. Should succeed
     """
-    m1 = topo_m2.ms["master1"]
-    m2 = topo_m2.ms["master2"]
+    m1 = topo_m2.ms["supplier1"]
+    m2 = topo_m2.ms["supplier2"]
     # Add user
     user = UserAccounts(m1, DEFAULT_SUFFIX, rdn=None).create_test_user(uid=1, gid=1)
     repl_manager = ReplicationManager(DEFAULT_SUFFIX)
@@ -114,7 +114,7 @@ def test_segfaults(topo_m2, _delete_after):
     """ns-slapd segfaults while trying to delete a tombstone entry
 
     :id: 9f8f7388-a64c-11ea-b5f7-8c16451d917b
-    :setup: MMR with 2 masters
+    :setup: MMR with 2 suppliers
     :steps:
         1. Add new user
         2. Delete user - should leave tombstone entry
@@ -128,7 +128,7 @@ def test_segfaults(topo_m2, _delete_after):
         4. Should succeed
         5. Should succeed
     """
-    m1 = topo_m2.ms["master1"]
+    m1 = topo_m2.ms["supplier1"]
     # Add user
     user = UserAccounts(m1, DEFAULT_SUFFIX, rdn=None).create_test_user(uid=10, gid=1)
     # Delete user - should leave tombstone entry
@@ -148,7 +148,7 @@ def test_adding_deleting(topo_m2, _delete_after):
     """Adding attribute with 11 values to entry
 
     :id: 99842b1e-a64c-11ea-b8e3-8c16451d917b
-    :setup: MMR with 2 masters
+    :setup: MMR with 2 suppliers
     :steps:
         1. Adding entry
         2. Adding attribute with 11 values to entry
@@ -158,7 +158,7 @@ def test_adding_deleting(topo_m2, _delete_after):
         2. Should succeed
         3. Should succeed
     """
-    m1 = topo_m2.ms["master1"]
+    m1 = topo_m2.ms["supplier1"]
     # Adding entry
     user = UserAccounts(m1, DEFAULT_SUFFIX, rdn=None).create_test_user(uid=1, gid=1)
     # Adding attribute with 11 values to entry
@@ -186,7 +186,7 @@ def test_deleting_twice(topo_m2):
     """Deleting entry twice crashed a server
 
     :id: 94045560-a64c-11ea-93d6-8c16451d917b
-    :setup: MMR with 2 masters
+    :setup: MMR with 2 suppliers
     :steps:
         1. Adding entry
         2. Deleting the same entry from s1
@@ -196,8 +196,8 @@ def test_deleting_twice(topo_m2):
         2. Should succeed
         3. Should succeed
     """
-    m1 = topo_m2.ms["master1"]
-    m2 = topo_m2.ms["master2"]
+    m1 = topo_m2.ms["supplier1"]
+    m2 = topo_m2.ms["supplier2"]
     # Adding entry
     user1 = UserAccounts(m1, DEFAULT_SUFFIX, rdn=None).create_test_user(uid=1, gid=1)
     repl_manager = ReplicationManager(DEFAULT_SUFFIX)
@@ -218,14 +218,14 @@ def test_rename_entry(topo_m2, _delete_after):
     """Rename entry crashed a server
 
     :id: 3866f9d6-a946-11ea-a3f8-8c16451d917b
-    :setup: MMR with 2 masters
+    :setup: MMR with 2 suppliers
     :steps:
         1. Adding entry
         2. Stop Agreement for both
         3. Change description
-        4. Change will not reflect on other master
+        4. Change will not reflect on other supplier
         5. Turn on agreement on both
-        6. Change will reflect on other master
+        6. Change will reflect on other supplier
     :expectedresults:
         1. Should succeed
         2. Should succeed
@@ -234,8 +234,8 @@ def test_rename_entry(topo_m2, _delete_after):
         5. Should succeed
         6. Should succeed
     """
-    m1 = topo_m2.ms["master1"]
-    m2 = topo_m2.ms["master2"]
+    m1 = topo_m2.ms["supplier1"]
+    m2 = topo_m2.ms["supplier2"]
     # Adding entry
     user1 = UserAccounts(m1, DEFAULT_SUFFIX, rdn=None).create_test_user(uid=1, gid=1)
     repl_manager = ReplicationManager(DEFAULT_SUFFIX)
@@ -250,7 +250,7 @@ def test_rename_entry(topo_m2, _delete_after):
     # change description
     user1.replace('description', 'New Des')
     assert user1.get_attr_val_utf8('description')
-    # Change will not reflect on other master
+    # Change will not reflect on other supplier
     with pytest.raises(AssertionError):
         assert user2.get_attr_val_utf8('description')
     # Turn on agreement on both
@@ -266,7 +266,7 @@ def test_userpassword_attribute(topo_m2, _delete_after):
         however an error message was displayed in the error logs, which was curious.
 
     :id: bdcf0464-a947-11ea-9f0d-8c16451d917b
-    :setup: MMR with 2 masters
+    :setup: MMR with 2 suppliers
     :steps:
         1. Add the test user to S1
         2. Check that the user has been propagated to Supplier 2
@@ -278,8 +278,8 @@ def test_userpassword_attribute(topo_m2, _delete_after):
         3. Should succeed
         4. Should succeed
     """
-    m1 = topo_m2.ms["master1"]
-    m2 = topo_m2.ms["master2"]
+    m1 = topo_m2.ms["supplier1"]
+    m2 = topo_m2.ms["supplier2"]
     # Add the test user to S1
     user1 = UserAccounts(m1, DEFAULT_SUFFIX, rdn=None).create_test_user(uid=1, gid=1)
     repl_manager = ReplicationManager(DEFAULT_SUFFIX)
@@ -296,7 +296,7 @@ def test_userpassword_attribute(topo_m2, _delete_after):
 
 
 def _create_and_delete_tombstone(topo_m2, id):
-    m1 = topo_m2.ms["master1"]
+    m1 = topo_m2.ms["supplier1"]
     # Add new user
     user1 = UserAccounts(m1, DEFAULT_SUFFIX, rdn=None).create_test_user(uid=id, gid=id)
     # Delete user - should leave tombstone entry
@@ -313,7 +313,7 @@ def test_tombstone_modrdn(topo_m2):
     """rhds90 crash on tombstone modrdn
 
     :id: 846f5042-a948-11ea-ade2-8c16451d917b
-    :setup: MMR with 2 masters
+    :setup: MMR with 2 suppliers
     :steps:
         1. Add new user
         2. Delete user - should leave tombstone entry

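Several of these regressions hinge on tombstones, which the helper _create_and_delete_tombstone produces; a condensed sketch, assuming the Tombstones collection (imported from lib389.tombstone in a later file of this diff) can enumerate deleted entries:

from lib389.idm.user import UserAccounts
from lib389.tombstone import Tombstones
from lib389._constants import DEFAULT_SUFFIX

def make_tombstone(supplier, uid):
    """Create a user, delete it, and return the resulting tombstone."""
    user = UserAccounts(supplier, DEFAULT_SUFFIX, rdn=None).create_test_user(uid=uid, gid=uid)
    user.delete()  # a replicated delete leaves a tombstone behind
    stones = Tombstones(supplier, DEFAULT_SUFFIX).list()
    assert len(stones) > 0
    return stones[0]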
+ 14 - 14
dirsrvtests/tests/suites/replication/single_master_test.py

@@ -18,7 +18,7 @@ from lib389.backend import Backends
 from lib389.topologies import topology_m1c1 as topo_r # Replication
 from lib389.topologies import topology_i2 as topo_nr # No replication
 
-from lib389._constants import (ReplicaRole, DEFAULT_SUFFIX, REPLICAID_MASTER_1,
+from lib389._constants import (ReplicaRole, DEFAULT_SUFFIX, REPLICAID_SUPPLIER_1,
                                 REPLICATION_BIND_DN, REPLICATION_BIND_PW,
                                 REPLICATION_BIND_METHOD, REPLICATION_TRANSPORT, DEFAULT_BACKUPDIR,
                                 RA_NAME, RA_BINDDN, RA_BINDPW, RA_METHOD, RA_TRANSPORT_PROT,
@@ -39,8 +39,8 @@ def test_mail_attr_repl(topo_r):
 
     :id: 959edc84-05be-4bf9-a541-53afae482052
     :customerscenario: True
-    :setup: Replication setup with master and consumer instances,
-            test user on master
+    :setup: Replication setup with supplier and consumer instances,
+            test user on supplier
     :steps:
         1. Check that user was replicated to consumer
         2. Back up mail database file
@@ -57,16 +57,16 @@ def test_mail_attr_repl(topo_r):
         6. No crash should happen
     """
 
-    master = topo_r.ms["master1"]
+    supplier = topo_r.ms["supplier1"]
     consumer = topo_r.cs["consumer1"]
     repl = ReplicationManager(DEFAULT_SUFFIX)
 
-    m_users = UserAccounts(topo_r.ms["master1"], DEFAULT_SUFFIX)
+    m_users = UserAccounts(topo_r.ms["supplier1"], DEFAULT_SUFFIX)
     m_user = m_users.ensure_state(properties=TEST_USER_PROPERTIES)
     m_user.ensure_present('mail', '[email protected]')
 
     log.info("Check that replication is working")
-    repl.wait_for_replication(master, consumer)
+    repl.wait_for_replication(supplier, consumer)
     c_users = UserAccounts(topo_r.cs["consumer1"], DEFAULT_SUFFIX)
     c_user = c_users.get('testuser')
 
@@ -85,11 +85,11 @@ def test_mail_attr_repl(topo_r):
     shutil.copyfile(mail_db_path, backup_path)
     consumer.start()
 
-    log.info("Remove 'mail' attr from master")
+    log.info("Remove 'mail' attr from supplier")
     m_user.remove_all('mail')
 
     log.info("Wait for the replication to happen")
-    repl.wait_for_replication(master, consumer)
+    repl.wait_for_replication(supplier, consumer)
 
     consumer.stop()
     log.info("Restore {} to {}".format(backup_path, mail_db_path))
@@ -100,7 +100,7 @@ def test_mail_attr_repl(topo_r):
     c_user.get_attr_val("mail")
 
     log.info("Make sure that server hasn't crashed")
-    repl.test_replication(master, consumer)
+    repl.test_replication(supplier, consumer)
 
 
 def test_lastupdate_attr_before_init(topo_nr):
@@ -108,7 +108,7 @@ def test_lastupdate_attr_before_init(topo_nr):
 
     :id: bc8ce431-ff65-41f5-9331-605cbcaaa887
     :customerscenario: True
-    :setup: Replication setup with master and consumer instances
+    :setup: Replication setup with supplier and consumer instances
             without initialization
     :steps:
         1. Check nsds5replicaLastUpdateStart value
@@ -123,11 +123,11 @@ def test_lastupdate_attr_before_init(topo_nr):
         4. Success
     """
 
-    master = topo_nr.ins["standalone1"]
+    supplier = topo_nr.ins["standalone1"]
     consumer = topo_nr.ins["standalone2"]
 
     repl = ReplicationManager(DEFAULT_SUFFIX)
-    repl.create_first_master(master)
+    repl.create_first_supplier(supplier)
 
     # Manually create an un-synced consumer.
 
@@ -140,9 +140,9 @@ def test_lastupdate_attr_before_init(topo_nr):
         'nsDS5ReplicaType': '2',
     })
 
-    agmt = repl.ensure_agreement(master, consumer)
+    agmt = repl.ensure_agreement(supplier, consumer)
     with pytest.raises(Exception):
-        repl.wait_for_replication(master, consumer, timeout=5)
+        repl.wait_for_replication(supplier, consumer, timeout=5)
 
     assert agmt.get_attr_val_utf8('nsds5replicaLastUpdateStart') == "19700101000000Z"
     assert agmt.get_attr_val_utf8("nsds5replicaLastUpdateEnd") == "19700101000000Z"

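test_lastupdate_attr_before_init leans on the newer ReplicationManager bring-up API; a sketch of the normal supplier-to-consumer wiring, using only the calls that appear in the hunk above:

from lib389.replica import ReplicationManager
from lib389._constants import DEFAULT_SUFFIX

def wire_supplier_to_consumer(supplier, consumer):
    repl = ReplicationManager(DEFAULT_SUFFIX)
    repl.create_first_supplier(supplier)              # enable the first supplier
    agmt = repl.ensure_agreement(supplier, consumer)  # create (or reuse) the agreement
    repl.wait_for_replication(supplier, consumer)     # block until the consumer is in sync
    return agmt

In the test itself the consumer is deliberately left uninitialized, so the wait times out and the nsds5replicaLastUpdateStart/End values stay at the epoch sentinel 19700101000000Z.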
+ 10 - 10
dirsrvtests/tests/suites/replication/tls_client_auth_repl_test.py

@@ -28,12 +28,12 @@ log = logging.getLogger(__name__)
 
 @pytest.fixture(scope="module")
 def tls_client_auth(topo_m2):
-    """Enable TLS on both masters and reconfigure
+    """Enable TLS on both suppliers and reconfigure
     both agreements to use TLS Client auth
     """
 
-    m1 = topo_m2.ms['master1']
-    m2 = topo_m2.ms['master2']
+    m1 = topo_m2.ms['supplier1']
+    m2 = topo_m2.ms['supplier2']
 
     if ds_is_older('1.4.0.6'):
         transport = 'SSL'
@@ -96,7 +96,7 @@ def test_ssl_transport(tls_client_auth):
     """Test different combinations for nsDS5ReplicaTransportInfo values
 
     :id: 922d16f8-662a-4915-a39e-0aecd7c8e6e2
-    :setup: Two master replication, enabled TLS client auth
+    :setup: Two supplier replication, enabled TLS client auth
     :steps:
         1. Set nsDS5ReplicaTransportInfoCheck: SSL or StartTLS or TLS
         2. Restart the instance
@@ -109,8 +109,8 @@ def test_ssl_transport(tls_client_auth):
         4. Success
     """
 
-    m1 = tls_client_auth.ms['master1']
-    m2 = tls_client_auth.ms['master2']
+    m1 = tls_client_auth.ms['supplier1']
+    m2 = tls_client_auth.ms['supplier2']
     repl = ReplicationManager(DEFAULT_SUFFIX)
     replica_m1 = Replicas(m1).get(DEFAULT_SUFFIX)
     replica_m2 = Replicas(m2).get(DEFAULT_SUFFIX)
@@ -143,11 +143,11 @@ def test_ssl_transport(tls_client_auth):
 
 
 def test_extract_pemfiles(tls_client_auth):
-    """Test TLS client authentication between two masters operates
+    """Test TLS client authentication between two suppliers operates
     as expected with 'on' and 'off' options of nsslapd-extract-pemfiles
 
     :id: 922d16f8-662a-4915-a39e-0aecd7c8e6e1
-    :setup: Two master replication, enabled TLS client auth
+    :setup: Two supplier replication, enabled TLS client auth
     :steps:
         1. Check that nsslapd-extract-pemfiles default value is right
         2. Check that replication works with both 'on' and 'off' values
@@ -156,8 +156,8 @@ def test_extract_pemfiles(tls_client_auth):
         2. Replication works
     """
 
-    m1 = tls_client_auth.ms['master1']
-    m2 = tls_client_auth.ms['master2']
+    m1 = tls_client_auth.ms['supplier1']
+    m2 = tls_client_auth.ms['supplier2']
     repl = ReplicationManager(DEFAULT_SUFFIX)
 
     if ds_is_older('1.3.7'):

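test_extract_pemfiles boils down to flipping one config switch and re-proving replication; a sketch, assuming the attribute is set through the cn=config wrapper and that a restart is needed for the TLS layer to pick it up:

from lib389.replica import ReplicationManager
from lib389._constants import DEFAULT_SUFFIX

def check_extract_pemfiles(m1, m2):
    repl = ReplicationManager(DEFAULT_SUFFIX)
    for value in ('on', 'off'):
        for inst in (m1, m2):
            inst.config.set('nsslapd-extract-pemfiles', value)
            inst.restart()  # assumption: restart so the setting takes effect
        repl.test_replication(m1, m2)  # replication must work either way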
+ 3 - 3
dirsrvtests/tests/suites/replication/tombstone_fixup_test.py

@@ -14,7 +14,7 @@ from lib389.tombstone import Tombstones
 from lib389.idm.user import UserAccounts, TEST_USER_PROPERTIES
 from lib389.replica import ReplicationManager
 from lib389._constants import (defaultProperties, DEFAULT_SUFFIX, ReplicaRole,
-                               REPLICAID_MASTER_1, REPLICA_PRECISE_PURGING, REPLICA_PURGE_DELAY,
+                               REPLICAID_SUPPLIER_1, REPLICA_PRECISE_PURGING, REPLICA_PURGE_DELAY,
                                REPLICA_PURGE_INTERVAL)
 
 pytestmark = pytest.mark.tier2
@@ -24,7 +24,7 @@ def test_precise_tombstone_purging(topology_m1):
     """ Test precise tombstone purging
 
     :id: adb86f50-ae76-4ed6-82b4-3cdc30ccab79
-    :setup: master1 instance
+    :setup: supplier1 instance
     :steps:
         1. Create and Delete entry to create a tombstone
         2. export ldif, edit, and import ldif
@@ -41,7 +41,7 @@ def test_precise_tombstone_purging(topology_m1):
         6. Success
     """
     
-    m1 = topology_m1.ms['master1']
+    m1 = topology_m1.ms['supplier1']
     m1_tasks = Tasks(m1)
 
     # Create tombstone entry

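Precise tombstone purging is driven by three replica attributes, which the imported REPLICA_PRECISE_PURGING, REPLICA_PURGE_DELAY, and REPLICA_PURGE_INTERVAL constants presumably name; a sketch with illustrative values:

from lib389.replica import Replicas
from lib389._constants import DEFAULT_SUFFIX

def enable_precise_purging(supplier):
    replica = Replicas(supplier).get(DEFAULT_SUFFIX)
    # Attribute names are assumptions based on the constants imported above
    replica.replace('nsds5ReplicaPreciseTombstonePurging', 'on')
    replica.replace('nsds5ReplicaPurgeDelay', '5')              # seconds before a tombstone may be purged
    replica.replace('nsds5ReplicaTombstonePurgeInterval', '5')  # how often the purge thread wakes up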
+ 1 - 1
dirsrvtests/tests/suites/replication/tombstone_test.py

@@ -32,7 +32,7 @@ def test_purge_success(topology_m1):
         3. The entry should be successfully deleted
         4. Tombstone entry should exist
     """
-    m1 = topology_m1.ms['master1']
+    m1 = topology_m1.ms['supplier1']
 
     users = UserAccounts(m1, DEFAULT_SUFFIX)
     user = users.create(properties=TEST_USER_PROPERTIES)

+ 28 - 28
dirsrvtests/tests/suites/replication/wait_for_async_feature_test.py

@@ -35,10 +35,10 @@ def waitfor_async_attr(topology_m2, request):
     attr_value = request.param[0]
     expected_result = request.param[1]
 
-    # Run through all masters
+    # Run through all suppliers
 
-    for master in topology_m2.ms.values():
-        agmt = Agreements(master).list()[0]
+    for supplier in topology_m2.ms.values():
+        agmt = Agreements(supplier).list()[0]
 
         if attr_value:
             agmt.set_wait_for_async_results(attr_value)
@@ -54,14 +54,14 @@ def waitfor_async_attr(topology_m2, request):
 
 @pytest.fixture
 def entries(topology_m2, request):
-    """Adds entries to the master1"""
+    """Adds entries to the supplier1"""
 
-    master1 = topology_m2.ms["master1"]
+    supplier1 = topology_m2.ms["supplier1"]
 
     test_list = []
 
-    log.info("Add 100 nested entries under replicated suffix on %s" % master1.serverid)
-    ous = OrganizationalUnits(master1, DEFAULT_SUFFIX)
+    log.info("Add 100 nested entries under replicated suffix on %s" % supplier1.serverid)
+    ous = OrganizationalUnits(supplier1, DEFAULT_SUFFIX)
     for i in range(100):
         ou = ous.create(properties={
             'ou' : 'test_ou_%s' % i,
@@ -74,7 +74,7 @@ def entries(topology_m2, request):
 
     def fin():
         log.info("Clear the errors log in the end of the test case")
-        with open(master1.errlog, 'w') as errlog:
+        with open(supplier1.errlog, 'w') as errlog:
             errlog.writelines("")
 
     request.addfinalizer(fin)
@@ -84,15 +84,15 @@ def test_not_int_value(topology_m2):
     """Tests not integer value
 
     :id: 67c9994f-9251-425a-8197-8d12ad9beafc
-    :setup: Replication with two masters
+    :setup: Replication with two suppliers
     :steps:
         1. Try to set some string value
            to nsDS5ReplicaWaitForAsyncResults
     :expectedresults:
         1. Invalid syntax error should be raised
     """
-    master1 = topology_m2.ms["master1"]
-    agmt = Agreements(master1).list()[0]
+    supplier1 = topology_m2.ms["supplier1"]
+    agmt = Agreements(supplier1).list()[0]
 
     with pytest.raises(ldap.INVALID_SYNTAX):
         agmt.set_wait_for_async_results("ws2")
@@ -101,7 +101,7 @@ def test_multi_value(topology_m2):
     """Tests multi value
 
     :id: 1932301a-db29-407e-b27e-4466a876d1d3
-    :setup: Replication with two masters
+    :setup: Replication with two suppliers
     :steps:
         1. Set nsDS5ReplicaWaitForAsyncResults to some int
         2. Try to add one more int value
@@ -111,8 +111,8 @@ def test_multi_value(topology_m2):
         2. Object class violation error should be raised
     """
 
-    master1 = topology_m2.ms["master1"]
-    agmt = Agreements(master1).list()[0]
+    supplier1 = topology_m2.ms["supplier1"]
+    agmt = Agreements(supplier1).list()[0]
 
     agmt.set_wait_for_async_results('100')
     with pytest.raises(ldap.OBJECT_CLASS_VIOLATION):
@@ -123,12 +123,12 @@ def test_value_check(topology_m2, waitfor_async_attr):
 
     :id: 3e81afe9-5130-410d-a1bb-d798d8ab8519
     :parametrized: yes
-    :setup: Replication with two masters,
-        wait for async set on all masters, try:
+    :setup: Replication with two suppliers,
+        wait for async set on all suppliers, try:
         None, '2000', '0', '-5'
     :steps:
-        1. Search for nsDS5ReplicaWaitForAsyncResults on master 1
-        2. Search for nsDS5ReplicaWaitForAsyncResults on master 2
+        1. Search for nsDS5ReplicaWaitForAsyncResults on supplier 1
+        2. Search for nsDS5ReplicaWaitForAsyncResults on supplier 2
     :expectedresults:
         1. nsDS5ReplicaWaitForAsyncResults should be set correctly
         2. nsDS5ReplicaWaitForAsyncResults should be set correctly
@@ -136,8 +136,8 @@ def test_value_check(topology_m2, waitfor_async_attr):
 
     attr_value = waitfor_async_attr[0]
 
-    for master in topology_m2.ms.values():
-        agmt = Agreements(master).list()[0]
+    for supplier in topology_m2.ms.values():
+        agmt = Agreements(supplier).list()[0]
 
         server_value = agmt.get_wait_for_async_results_utf8()
         assert server_value == attr_value
@@ -148,12 +148,12 @@ def test_behavior_with_value(topology_m2, waitfor_async_attr, entries):
 
     :id: 117b6be2-cdab-422e-b0c7-3b88bbeec036
     :parametrized: yes
-    :setup: Replication with two masters,
-        wait for async set on all masters, try:
+    :setup: Replication with two suppliers,
+        wait for async set on all suppliers, try:
         None, '2000', '0', '-5'
     :steps:
         1. Set Replication Debugging loglevel for the errorlog
-        2. Set nsslapd-logging-hr-timestamps-enabled to 'off' on both masters
+        2. Set nsslapd-logging-hr-timestamps-enabled to 'off' on both suppliers
         3. Gather all sync attempts,  group by timestamp
         4. Take the most common timestamp and assert it has appeared
            in the set range
@@ -164,12 +164,12 @@ def test_behavior_with_value(topology_m2, waitfor_async_attr, entries):
         4. All timestamps should appear in the errors log
     """
 
-    master1 = topology_m2.ms["master1"]
-    master2 = topology_m2.ms["master2"]
+    supplier1 = topology_m2.ms["supplier1"]
+    supplier2 = topology_m2.ms["supplier2"]
 
     log.info("Set Replication Debugging loglevel for the errorlog")
-    master1.config.loglevel((ErrorLog.REPLICA,))
-    master2.config.loglevel((ErrorLog.REPLICA,))
+    supplier1.config.loglevel((ErrorLog.REPLICA,))
+    supplier2.config.loglevel((ErrorLog.REPLICA,))
 
     sync_dict = Counter()
     min_ap = waitfor_async_attr[1][0]
@@ -178,7 +178,7 @@ def test_behavior_with_value(topology_m2, waitfor_async_attr, entries):
     time.sleep(20)
 
     log.info("Gather all sync attempts within Counter dict, group by timestamp")
-    with open(master1.errlog, 'r') as errlog:
+    with open(supplier1.errlog, 'r') as errlog:
         errlog_filtered = filter(lambda x: "waitfor_async_results" in x, errlog)
 
         # Watch only over unsuccessful sync attempts

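The attribute validation these fixtures exercise can be summarized in one helper; a sketch using the agreement accessors shown in the hunks above:

import ldap
import pytest
from lib389.agreement import Agreements

def check_wait_for_async(supplier):
    agmt = Agreements(supplier).list()[0]
    agmt.set_wait_for_async_results('2000')             # an integer value is accepted
    assert agmt.get_wait_for_async_results_utf8() == '2000'
    with pytest.raises(ldap.INVALID_SYNTAX):
        agmt.set_wait_for_async_results('ws2')          # non-integer values are rejected
    with pytest.raises(ldap.OBJECT_CLASS_VIOLATION):
        agmt.add('nsDS5ReplicaWaitForAsyncResults', '300')  # the attribute is single-valued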
+ 1 - 1
dirsrvtests/tests/suites/rewriters/adfilter_test.py

@@ -125,7 +125,7 @@ def test_adfilter_objectSid(topology_st):
 
     topology_st.standalone.restart()
 
-    # Contains a list of b64encoded SID from https://github.com/SSSD/sssd/blob/master/src/tests/intg/data/ad_data.ldif
+    # Contains a list of b64encoded SID from https://github.com/SSSD/sssd/blob/supplier/src/tests/intg/data/ad_data.ldif
     SIDs = ["AQUAAAAAAAUVAAAADcfLTVzC66zo0l8EUAQAAA==",
             "AQUAAAAAAAUVAAAADcfLTVzC66zo0l8E9gEAAA==",
             "AQUAAAAAAAUVAAAADcfLTVzC66zo0l8EAwIAAA==",

+ 25 - 25
dirsrvtests/tests/suites/sasl/regression_test.py

@@ -89,16 +89,16 @@ def check_pems(confdir, mycacert, myservercert, myserverkey, notexist):
 
 
 def relocate_pem_files(topology_m2):
-    log.info("######################### Relocate PEM files on master1 ######################")
+    log.info("######################### Relocate PEM files on supplier1 ######################")
     certdir_prefix = "/dev/shm"
     mycacert = os.path.join(certdir_prefix, "MyCA")
-    topology_m2.ms["master1"].encryption.set('CACertExtractFile', mycacert)
+    topology_m2.ms["supplier1"].encryption.set('CACertExtractFile', mycacert)
     myservercert = os.path.join(certdir_prefix, "MyServerCert1")
     myserverkey = os.path.join(certdir_prefix, "MyServerKey1")
-    topology_m2.ms["master1"].rsa.apply_mods([(ldap.MOD_REPLACE, 'ServerCertExtractFile', myservercert),
+    topology_m2.ms["supplier1"].rsa.apply_mods([(ldap.MOD_REPLACE, 'ServerCertExtractFile', myservercert),
                                               (ldap.MOD_REPLACE, 'ServerKeyExtractFile', myserverkey)])
-    log.info("##### restart master1")
-    topology_m2.ms["master1"].restart()
+    log.info("##### restart supplier1")
+    topology_m2.ms["supplier1"].restart()
     check_pems(certdir_prefix, mycacert, myservercert, myserverkey, "")
 
 @pytest.mark.ds47536
@@ -107,19 +107,19 @@ def test_openldap_no_nss_crypto(topology_m2):
     that don't use NSS for crypto
 
     :id: 0a622f3d-8ba5-4df2-a1de-1fb2237da40a
-    :setup: Replication with two masters:
-        master_1 ----- startTLS -----> master_2;
-        master_1 <-- TLS_clientAuth -- master_2;
-        nsslapd-extract-pemfiles set to 'on' on both masters
+    :setup: Replication with two suppliers:
+        supplier_1 ----- startTLS -----> supplier_2;
+        supplier_1 <-- TLS_clientAuth -- supplier_2;
+        nsslapd-extract-pemfiles set to 'on' on both suppliers
         without specifying cert names
     :steps:
-        1. Add 5 users to master 1 and 2
+        1. Add 5 users to supplier 1 and 2
         2. Check that the users were successfully replicated
-        3. Relocate PEM files on master 1
-        4. Check PEM files in master 1 config directory
-        5. Add 5 users more to master 1 and 2
+        3. Relocate PEM files on supplier 1
+        4. Check PEM files in supplier 1 config directory
+        5. Add 5 users more to supplier 1 and 2
         6. Check that the users were successfully replicated
-        7. Export userRoot on master 1
+        7. Export userRoot on supplier 1
     :expectedresults:
         1. Users should be successfully added
         2. Users should be successfully replicated
@@ -132,42 +132,42 @@ def test_openldap_no_nss_crypto(topology_m2):
 
     log.info("Ticket 47536 - Allow usage of OpenLDAP libraries that don't use NSS for crypto")
 
-    m1 = topology_m2.ms["master1"]
-    m2 = topology_m2.ms["master2"]
+    m1 = topology_m2.ms["supplier1"]
+    m2 = topology_m2.ms["supplier2"]
     [i.enable_tls() for i in topology_m2]
     repl = ReplicationManager(DEFAULT_SUFFIX)
     repl.test_replication(m1, m2)
 
-    add_entry(m1, 'master1', 'uid=m1user', 0, 5)
-    add_entry(m2, 'master2', 'uid=m2user', 0, 5)
+    add_entry(m1, 'supplier1', 'uid=m1user', 0, 5)
+    add_entry(m2, 'supplier2', 'uid=m2user', 0, 5)
     repl.wait_for_replication(m1, m2)
     repl.wait_for_replication(m2, m1)
 
-    log.info('##### Searching for entries on master1...')
+    log.info('##### Searching for entries on supplier1...')
     entries = m1.search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, '(uid=*)')
     assert 11 == len(entries)
 
-    log.info('##### Searching for entries on master2...')
+    log.info('##### Searching for entries on supplier2...')
     entries = m2.search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, '(uid=*)')
     assert 11 == len(entries)
 
     relocate_pem_files(topology_m2)
 
-    add_entry(m1, 'master1', 'uid=m1user', 10, 5)
-    add_entry(m2, 'master2', 'uid=m2user', 10, 5)
+    add_entry(m1, 'supplier1', 'uid=m1user', 10, 5)
+    add_entry(m2, 'supplier2', 'uid=m2user', 10, 5)
 
     repl.wait_for_replication(m1, m2)
     repl.wait_for_replication(m2, m1)
 
-    log.info('##### Searching for entries on master1...')
+    log.info('##### Searching for entries on supplier1...')
     entries = m1.search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, '(uid=*)')
     assert 21 == len(entries)
 
-    log.info('##### Searching for entries on master2...')
+    log.info('##### Searching for entries on supplier2...')
     entries = m2.search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, '(uid=*)')
     assert 21 == len(entries)
 
-    output_file = os.path.join(m1.get_ldif_dir(), "master1.ldif")
+    output_file = os.path.join(m1.get_ldif_dir(), "supplier1.ldif")
     m1.tasks.exportLDIF(benamebase='userRoot', output_file=output_file, args={'wait': True})
 
     log.info("Ticket 47536 - PASSED")

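The final export in this test is an online task, unlike the offline db2ldif used in ruvstore_test.py earlier in this diff; a sketch of the same call, factored out:

import os

def export_suffix(inst, name='supplier1'):
    """Run an online LDIF export via cn=tasks; the server stays up."""
    output_file = os.path.join(inst.get_ldif_dir(), '{}.ldif'.format(name))
    inst.tasks.exportLDIF(benamebase='userRoot', output_file=output_file,
                          args={'wait': True})  # block until the task finishes
    return output_file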
+ 115 - 115
dirsrvtests/tests/suites/schema/schema_replication_test.py

@@ -38,11 +38,11 @@ MAY_NEW = "(postalCode $ street $ postOfficeBox)"
 
 
 def _header(topology_m1c1, label):
-    topology_m1c1.ms["master1"].log.info("\n\n###############################################")
-    topology_m1c1.ms["master1"].log.info("#######")
-    topology_m1c1.ms["master1"].log.info("####### %s" % label)
-    topology_m1c1.ms["master1"].log.info("#######")
-    topology_m1c1.ms["master1"].log.info("###################################################")
+    topology_m1c1.ms["supplier1"].log.info("\n\n###############################################")
+    topology_m1c1.ms["supplier1"].log.info("#######")
+    topology_m1c1.ms["supplier1"].log.info("####### %s" % label)
+    topology_m1c1.ms["supplier1"].log.info("#######")
+    topology_m1c1.ms["supplier1"].log.info("###################################################")
 
 
 def pattern_errorlog(file, log_pattern):
@@ -99,10 +99,10 @@ def support_schema_learning(topology_m1c1):
     with https://fedorahosted.org/389/ticket/47721, the supplier and consumer can learn
     schema definitions when a replication occurs.
     Before that ticket: replication of the schema fails requiring administrative operation
-    In the test the schemaCSN (master consumer) differs
+    In the test the schemaCSN (supplier consumer) differs
 
     After that ticket: replication of the schema succeeds (after an initial phase of learning)
-    In the test the schema CSN (master consumer) are in sync
+    In the test the schema CSNs (supplier and consumer) are in sync
 
     This function returns True if 47721 is fixed in the current release
     False else
@@ -135,7 +135,7 @@ def trigger_update(topology_m1c1):
     except AttributeError:
         trigger_update.value = 1
     replace = [(ldap.MOD_REPLACE, 'telephonenumber', ensure_bytes(str(trigger_update.value)))]
-    topology_m1c1.ms["master1"].modify_s(ENTRY_DN, replace)
+    topology_m1c1.ms["supplier1"].modify_s(ENTRY_DN, replace)
 
     # wait up to 10 seconds for the update to be replicated
     loop = 0
@@ -163,14 +163,14 @@ def trigger_schema_push(topology_m1c1):
     push the schema (and the schemaCSN).
     This is why there are two updates and the replica agreement is stopped/started (to create a second session)
     '''
-    agreements = topology_m1c1.ms["master1"].agreement.list(suffix=SUFFIX,
+    agreements = topology_m1c1.ms["supplier1"].agreement.list(suffix=SUFFIX,
                                                             consumer_host=topology_m1c1.cs["consumer1"].host,
                                                             consumer_port=topology_m1c1.cs["consumer1"].port)
     assert (len(agreements) == 1)
     ra = agreements[0]
     trigger_update(topology_m1c1)
-    topology_m1c1.ms["master1"].agreement.pause(ra.dn)
-    topology_m1c1.ms["master1"].agreement.resume(ra.dn)
+    topology_m1c1.ms["supplier1"].agreement.pause(ra.dn)
+    topology_m1c1.ms["supplier1"].agreement.resume(ra.dn)
     trigger_update(topology_m1c1)
 
 
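The pause/resume in trigger_schema_push matters because the schema (and its nsSchemaCSN) is only evaluated for push at the start of a replication session, so a fresh session must be forced after the update; a condensed sketch of the same flow:

def force_schema_push(supplier, consumer, suffix):
    """Force a second replication session so a pending schema change is pushed."""
    ra = supplier.agreement.list(suffix=suffix,
                                 consumer_host=consumer.host,
                                 consumer_port=consumer.port)[0]
    supplier.agreement.pause(ra.dn)   # end the current session
    supplier.agreement.resume(ra.dn)  # new session: schema CSNs are re-compared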
@@ -179,14 +179,14 @@ def schema_replication_init(topology_m1c1):
     """Initialize the test environment
 
     """
-    log.debug("test_schema_replication_init topology_m1c1 %r (master %r, consumer %r" % (
-    topology_m1c1, topology_m1c1.ms["master1"], topology_m1c1.cs["consumer1"]))
+    log.debug("test_schema_replication_init topology_m1c1 %r (supplier %r, consumer %r" % (
+    topology_m1c1, topology_m1c1.ms["supplier1"], topology_m1c1.cs["consumer1"]))
     # check if a warning message is logged in the
     # error log of the supplier
-    topology_m1c1.ms["master1"].errorlog_file = open(topology_m1c1.ms["master1"].errlog, "r")
+    topology_m1c1.ms["supplier1"].errorlog_file = open(topology_m1c1.ms["supplier1"].errlog, "r")
 
     # This entry will be used to trigger attempt of schema push
-    topology_m1c1.ms["master1"].add_s(Entry((ENTRY_DN, {
+    topology_m1c1.ms["supplier1"].add_s(Entry((ENTRY_DN, {
         'objectclass': "top person".split(),
         'sn': 'test_entry',
         'cn': 'test_entry'})))
@@ -198,12 +198,12 @@ def test_schema_replication_one(topology_m1c1, schema_replication_init):
     schema is pushed and there is no message in the error log
 
     :id: d6c6ff30-b3ae-4001-80ff-0fb18563a393
-    :setup: Master Consumer, check if a warning message is logged in the
+    :setup: Supplier Consumer, check if a warning message is logged in the
             error log of the supplier and add a test entry to trigger attempt of schema push.
     :steps:
         1. Update the schema of supplier, so it will be superset of consumer
         2. Push the Schema (no error)
-        3. Check both master and consumer has same schemaCSN
+        3. Check both supplier and consumer have the same schemaCSN
         4. Check the startup/final state
     :expectedresults:
         1. Operation should be successful
@@ -213,30 +213,30 @@ def test_schema_replication_one(topology_m1c1, schema_replication_init):
             - supplier default schema
             - consumer default schema
            Final state
-            - supplier +masterNewOCA
-            - consumer +masterNewOCA
+            - supplier +supplierNewOCA
+            - consumer +supplierNewOCA
     """
 
     _header(topology_m1c1, "Extra OC Schema is pushed - no error")
 
-    log.debug("test_schema_replication_one topology_m1c1 %r (master %r, consumer %r" % (
-    topology_m1c1, topology_m1c1.ms["master1"], topology_m1c1.cs["consumer1"]))
+    log.debug("test_schema_replication_one topology_m1c1 %r (supplier %r, consumer %r" % (
+    topology_m1c1, topology_m1c1.ms["supplier1"], topology_m1c1.cs["consumer1"]))
     # update the schema of the supplier so that it is a superset of
     # consumer. Schema should be pushed
-    add_OC(topology_m1c1.ms["master1"], 2, 'masterNewOCA')
+    add_OC(topology_m1c1.ms["supplier1"], 2, 'supplierNewOCA')
 
     trigger_schema_push(topology_m1c1)
-    master_schema_csn = topology_m1c1.ms["master1"].schema.get_schema_csn()
+    supplier_schema_csn = topology_m1c1.ms["supplier1"].schema.get_schema_csn()
     consumer_schema_csn = topology_m1c1.cs["consumer1"].schema.get_schema_csn()
 
     # Check the schemaCSN was updated on the consumer
-    log.debug("test_schema_replication_one master_schema_csn=%s", master_schema_csn)
+    log.debug("test_schema_replication_one supplier_schema_csn=%s", supplier_schema_csn)
     log.debug("ctest_schema_replication_one onsumer_schema_csn=%s", consumer_schema_csn)
-    assert master_schema_csn == consumer_schema_csn
+    assert supplier_schema_csn == consumer_schema_csn
 
     # Check the error log of the supplier does not contain an error
     regex = re.compile(r"must not be overwritten \(set replication log for additional info\)")
-    res = pattern_errorlog(topology_m1c1.ms["master1"].errorlog_file, regex)
+    res = pattern_errorlog(topology_m1c1.ms["supplier1"].errorlog_file, regex)
     if res is not None:
         assert False
 
@@ -247,7 +247,7 @@ def test_schema_replication_two(topology_m1c1, schema_replication_init):
         schema is pushed and there is a message in the error log
 
     :id: b5db9b75-a9a7-458e-86ec-2a8e7bd1c014
-    :setup: Master Consumer, check if a warning message is logged in the
+    :setup: Supplier Consumer, check if a warning message is logged in the
             error log of the supplier and add a test entry to trigger attempt of schema push.
     :steps:
         1. Update the schema of consumer, so it will be superset of supplier
@@ -263,11 +263,11 @@ def test_schema_replication_two(topology_m1c1, schema_replication_init):
         4. Operation should be successful
         5. Operation should be successful
         6. State at startup
-            - supplier +masterNewOCA
-            - consumer +masterNewOCA
+            - supplier +supplierNewOCA
+            - consumer +supplierNewOCA
            Final state
-            - supplier +masterNewOCA +masterNewOCB
-            - consumer +masterNewOCA               +consumerNewOCA
+            - supplier +supplierNewOCA +supplierNewOCB
+            - consumer +supplierNewOCA               +consumerNewOCA
     """
 
     _header(topology_m1c1, "Extra OC Schema is pushed - (ticket 47721 allows to learn missing def)")
@@ -277,26 +277,26 @@ def test_schema_replication_two(topology_m1c1, schema_replication_init):
 
     # add a new OC on the supplier so that its nsSchemaCSN is larger than the consumer (wait 2s)
     time.sleep(2)
-    add_OC(topology_m1c1.ms["master1"], 3, 'masterNewOCB')
+    add_OC(topology_m1c1.ms["supplier1"], 3, 'supplierNewOCB')
 
     # now push the schema
     trigger_schema_push(topology_m1c1)
-    master_schema_csn = topology_m1c1.ms["master1"].schema.get_schema_csn()
+    supplier_schema_csn = topology_m1c1.ms["supplier1"].schema.get_schema_csn()
     consumer_schema_csn = topology_m1c1.cs["consumer1"].schema.get_schema_csn()
 
     # Check the schemaCSN was NOT updated on the consumer
     # with 47721, supplier learns the missing definition
-    log.debug("test_schema_replication_two master_schema_csn=%s", master_schema_csn)
+    log.debug("test_schema_replication_two supplier_schema_csn=%s", supplier_schema_csn)
     log.debug("test_schema_replication_two consumer_schema_csn=%s", consumer_schema_csn)
     if support_schema_learning(topology_m1c1):
-        assert master_schema_csn == consumer_schema_csn
+        assert supplier_schema_csn == consumer_schema_csn
     else:
-        assert master_schema_csn != consumer_schema_csn
+        assert supplier_schema_csn != consumer_schema_csn
 
     # Check the error log of the supplier does not contain an error
     # This message may happen during the learning phase
     regex = re.compile(r"must not be overwritten \(set replication log for additional info\)")
-    res = pattern_errorlog(topology_m1c1.ms["master1"].errorlog_file, regex)
+    res = pattern_errorlog(topology_m1c1.ms["supplier1"].errorlog_file, regex)
 
 
 @pytest.mark.ds47490
@@ -305,10 +305,10 @@ def test_schema_replication_three(topology_m1c1, schema_replication_init):
     schema is pushed and there is no message in the error log
 
     :id: 45888895-76bc-4cc3-9f90-33a69d027116
-    :setup: Master Consumer, check if a warning message is logged in the
+    :setup: Supplier Consumer, check if a warning message is logged in the
             error log of the supplier and add a test entry to trigger attempt of schema push.
     :steps:
-        1. Update the schema of master
+        1. Update the schema of supplier
         2. Push the Schema (no error)
         3. Check the schemaCSN was NOT updated on the consumer
         4. Check the error logs for no errors
@@ -319,31 +319,31 @@ def test_schema_replication_three(topology_m1c1, schema_replication_init):
         3. Operation should be successful
         4. Operation should be successful
         5. State at startup
-            - supplier +masterNewOCA +masterNewOCB
-            - consumer +masterNewOCA               +consumerNewOCA
+            - supplier +supplierNewOCA +supplierNewOCB
+            - consumer +supplierNewOCA               +consumerNewOCA
            Final state
-            - supplier +masterNewOCA +masterNewOCB +consumerNewOCA
-            - consumer +masterNewOCA +masterNewOCB +consumerNewOCA
+            - supplier +supplierNewOCA +supplierNewOCB +consumerNewOCA
+            - consumer +supplierNewOCA +supplierNewOCB +consumerNewOCA
     """
     _header(topology_m1c1, "Extra OC Schema is pushed - no error")
 
     # Do an update to trigger the schema push attempt
     # add this OC on consumer. Supplier will not push the schema
-    add_OC(topology_m1c1.ms["master1"], 1, 'consumerNewOCA')
+    add_OC(topology_m1c1.ms["supplier1"], 1, 'consumerNewOCA')
 
     # now push the schema
     trigger_schema_push(topology_m1c1)
-    master_schema_csn = topology_m1c1.ms["master1"].schema.get_schema_csn()
+    supplier_schema_csn = topology_m1c1.ms["supplier1"].schema.get_schema_csn()
     consumer_schema_csn = topology_m1c1.cs["consumer1"].schema.get_schema_csn()
 
     # Check the schemaCSN was NOT updated on the consumer
-    log.debug("test_schema_replication_three master_schema_csn=%s", master_schema_csn)
+    log.debug("test_schema_replication_three supplier_schema_csn=%s", supplier_schema_csn)
     log.debug("test_schema_replication_three consumer_schema_csn=%s", consumer_schema_csn)
-    assert master_schema_csn == consumer_schema_csn
+    assert supplier_schema_csn == consumer_schema_csn
 
     # Check the error log of the supplier does not contain an error
     regex = re.compile(r"must not be overwritten \(set replication log for additional info\)")
-    res = pattern_errorlog(topology_m1c1.ms["master1"].errorlog_file, regex)
+    res = pattern_errorlog(topology_m1c1.ms["supplier1"].errorlog_file, regex)
     if res is not None:
         assert False
 
@@ -354,10 +354,10 @@ def test_schema_replication_four(topology_m1c1, schema_replication_init):
     schema is pushed and there is no message in the error log
 
     :id: 39304242-2641-4eb8-a9fb-5ff0cf80718f
-    :setup: Master Consumer, check if a warning message is logged in the
+    :setup: Supplier Consumer, check if a warning message is logged in the
             error log of the supplier and add a test entry to trigger attempt of schema push.
     :steps:
-        1. Add telenumber to 'masterNewOCA' on the master
+        1. Add telenumber to 'supplierNewOCA' on the supplier
         2. Push the Schema (no error)
         3. Check the schemaCSN was updated on the consumer
         4. Check the error log of the supplier does not contain an error
@@ -368,31 +368,31 @@ def test_schema_replication_four(topology_m1c1, schema_replication_init):
         3. Operation should be successful
         4. Operation should be successful
         5. State at startup
-            - supplier +masterNewOCA +masterNewOCB +consumerNewOCA
-            - consumer +masterNewOCA +masterNewOCB +consumerNewOCA
+            - supplier +supplierNewOCA +supplierNewOCB +consumerNewOCA
+            - consumer +supplierNewOCA +supplierNewOCB +consumerNewOCA
            Final state
-            - supplier +masterNewOCA     +masterNewOCB     +consumerNewOCA
+            - supplier +supplierNewOCA     +supplierNewOCB     +consumerNewOCA
                        +must=telexnumber
-            - consumer +masterNewOCA     +masterNewOCB     +consumerNewOCA
+            - consumer +supplierNewOCA     +supplierNewOCB     +consumerNewOCA
                        +must=telexnumber
     """
     _header(topology_m1c1, "Same OC - extra MUST: Schema is pushed - no error")
 
-    mod_OC(topology_m1c1.ms["master1"], 2, 'masterNewOCA', old_must=MUST_OLD, new_must=MUST_NEW, old_may=MAY_OLD,
+    mod_OC(topology_m1c1.ms["supplier1"], 2, 'supplierNewOCA', old_must=MUST_OLD, new_must=MUST_NEW, old_may=MAY_OLD,
            new_may=MAY_OLD)
 
     trigger_schema_push(topology_m1c1)
-    master_schema_csn = topology_m1c1.ms["master1"].schema.get_schema_csn()
+    supplier_schema_csn = topology_m1c1.ms["supplier1"].schema.get_schema_csn()
     consumer_schema_csn = topology_m1c1.cs["consumer1"].schema.get_schema_csn()
 
     # Check the schemaCSN was updated on the consumer
-    log.debug("test_schema_replication_four master_schema_csn=%s", master_schema_csn)
+    log.debug("test_schema_replication_four supplier_schema_csn=%s", supplier_schema_csn)
     log.debug("ctest_schema_replication_four onsumer_schema_csn=%s", consumer_schema_csn)
-    assert master_schema_csn == consumer_schema_csn
+    assert supplier_schema_csn == consumer_schema_csn
 
     # Check the error log of the supplier does not contain an error
     regex = re.compile(r"must not be overwritten \(set replication log for additional info\)")
-    res = pattern_errorlog(topology_m1c1.ms["master1"].errorlog_file, regex)
+    res = pattern_errorlog(topology_m1c1.ms["supplier1"].errorlog_file, regex)
     if res is not None:
         assert False
 
@@ -403,7 +403,7 @@ def test_schema_replication_five(topology_m1c1, schema_replication_init):
     schema is  pushed (fix for 47721) and there is a message in the error log
 
     :id: 498527df-28c8-4e1a-bc9e-799fd2b7b2bb
-    :setup: Master Consumer, check if a warning message is logged in the
+    :setup: Supplier Consumer, check if a warning message is logged in the
             error log of the supplier and add a test entry to trigger attempt of schema push.
     :steps:
         1. Add telenumber to 'consumerNewOCA' on the consumer
@@ -419,14 +419,14 @@ def test_schema_replication_five(topology_m1c1, schema_replication_init):
         4. Operation should be successful
         5. Operation should be successful
         6. State at startup
-            - supplier +masterNewOCA     +masterNewOCB     +consumerNewOCA
+            - supplier +supplierNewOCA     +supplierNewOCB     +consumerNewOCA
                        +must=telexnumber
-            - consumer +masterNewOCA     +masterNewOCB     +consumerNewOCA
+            - consumer +supplierNewOCA     +supplierNewOCB     +consumerNewOCA
                         +must=telexnumber
            Final state
-            - supplier +masterNewOCA     +masterNewOCB     +consumerNewOCA    +masterNewOCC
+            - supplier +supplierNewOCA     +supplierNewOCB     +consumerNewOCA    +supplierNewOCC
                        +must=telexnumber
-            - consumer +masterNewOCA     +masterNewOCB     +consumerNewOCA
+            - consumer +supplierNewOCA     +supplierNewOCB     +consumerNewOCA
                        +must=telexnumber                   +must=telexnumber
 
            Note: replication log is enabled to get more details
@@ -434,32 +434,32 @@ def test_schema_replication_five(topology_m1c1, schema_replication_init):
     _header(topology_m1c1, "Same OC - extra MUST: Schema is pushed - (fix for 47721)")
 
     # get more detail why it fails
-    topology_m1c1.ms["master1"].enableReplLogging()
+    topology_m1c1.ms["supplier1"].enableReplLogging()
 
     # add telenumber to 'consumerNewOCA' on the consumer
     mod_OC(topology_m1c1.cs["consumer1"], 1, 'consumerNewOCA', old_must=MUST_OLD, new_must=MUST_NEW, old_may=MAY_OLD,
            new_may=MAY_OLD)
     # add a new OC on the supplier so that its nsSchemaCSN is larger than the consumer (wait 2s)
     time.sleep(2)
-    add_OC(topology_m1c1.ms["master1"], 4, 'masterNewOCC')
+    add_OC(topology_m1c1.ms["supplier1"], 4, 'supplierNewOCC')
 
     trigger_schema_push(topology_m1c1)
-    master_schema_csn = topology_m1c1.ms["master1"].schema.get_schema_csn()
+    supplier_schema_csn = topology_m1c1.ms["supplier1"].schema.get_schema_csn()
     consumer_schema_csn = topology_m1c1.cs["consumer1"].schema.get_schema_csn()
 
     # Check the schemaCSN was NOT updated on the consumer
     # with 47721, supplier learns the missing definition
-    log.debug("test_schema_replication_five master_schema_csn=%s", master_schema_csn)
+    log.debug("test_schema_replication_five supplier_schema_csn=%s", supplier_schema_csn)
     log.debug("ctest_schema_replication_five onsumer_schema_csn=%s", consumer_schema_csn)
     if support_schema_learning(topology_m1c1):
-        assert master_schema_csn == consumer_schema_csn
+        assert supplier_schema_csn == consumer_schema_csn
     else:
-        assert master_schema_csn != consumer_schema_csn
+        assert supplier_schema_csn != consumer_schema_csn
 
     # Check the error log of the supplier does not contain an error
     # This message may happen during the learning phase
     regex = re.compile(r"must not be overwritten \(set replication log for additional info\)")
-    res = pattern_errorlog(topology_m1c1.ms["master1"].errorlog_file, regex)
+    res = pattern_errorlog(topology_m1c1.ms["supplier1"].errorlog_file, regex)
 
 
 @pytest.mark.ds47490
@@ -468,10 +468,10 @@ def test_schema_replication_six(topology_m1c1, schema_replication_init):
     schema is pushed and there is no message in the error log
 
     :id: ed57b0cc-6a10-4f89-94ae-9f18542b1954
-    :setup: Master Consumer, check if a warning message is logged in the
+    :setup: Supplier Consumer, check if a warning message is logged in the
             error log of the supplier and add a test entry to trigger attempt of schema push.
     :steps:
-        1. Add telenumber to 'consumerNewOCA' on the master
+        1. Add telenumber to 'consumerNewOCA' on the supplier
         2. Push the Schema (no error)
         3. Check the schemaCSN was NOT updated on the consumer
         4. Check the error log of the supplier does not contain an error
@@ -482,14 +482,14 @@ def test_schema_replication_six(topology_m1c1, schema_replication_init):
         3. Operation should be successful
         4. Operation should be successful
         5. State at startup
-            - supplier +masterNewOCA     +masterNewOCB     +consumerNewOCA    +masterNewOCC
+            - supplier +supplierNewOCA     +supplierNewOCB     +consumerNewOCA    +supplierNewOCC
                        +must=telexnumber
-            - consumer +masterNewOCA     +masterNewOCB     +consumerNewOCA
+            - consumer +supplierNewOCA     +supplierNewOCB     +consumerNewOCA
                        +must=telexnumber                   +must=telexnumber
            Final state
-            - supplier +masterNewOCA     +masterNewOCB     +consumerNewOCA    +masterNewOCC
+            - supplier +supplierNewOCA     +supplierNewOCB     +consumerNewOCA    +supplierNewOCC
                        +must=telexnumber                   +must=telexnumber
-            - consumer +masterNewOCA     +masterNewOCB     +consumerNewOCA    +masterNewOCC
+            - consumer +supplierNewOCA     +supplierNewOCB     +consumerNewOCA    +supplierNewOCC
                        +must=telexnumber                   +must=telexnumber
     
            Note: replication log is enabled to get more details
@@ -497,22 +497,22 @@ def test_schema_replication_six(topology_m1c1, schema_replication_init):
     _header(topology_m1c1, "Same OC - extra MUST: Schema is pushed - no error")
 
     # add telenumber to 'consumerNewOCA' on the consumer
-    mod_OC(topology_m1c1.ms["master1"], 1, 'consumerNewOCA', old_must=MUST_OLD, new_must=MUST_NEW, old_may=MAY_OLD,
+    mod_OC(topology_m1c1.ms["supplier1"], 1, 'consumerNewOCA', old_must=MUST_OLD, new_must=MUST_NEW, old_may=MAY_OLD,
            new_may=MAY_OLD)
 
     trigger_schema_push(topology_m1c1)
-    master_schema_csn = topology_m1c1.ms["master1"].schema.get_schema_csn()
+    supplier_schema_csn = topology_m1c1.ms["supplier1"].schema.get_schema_csn()
     consumer_schema_csn = topology_m1c1.cs["consumer1"].schema.get_schema_csn()
 
     # Check the schemaCSN was NOT updated on the consumer
-    log.debug("test_schema_replication_six master_schema_csn=%s", master_schema_csn)
+    log.debug("test_schema_replication_six supplier_schema_csn=%s", supplier_schema_csn)
     log.debug("ctest_schema_replication_six onsumer_schema_csn=%s", consumer_schema_csn)
-    assert master_schema_csn == consumer_schema_csn
+    assert supplier_schema_csn == consumer_schema_csn
 
     # Check the error log of the supplier does not contain an error
     # This message may happen during the learning phase
     regex = re.compile(r"must not be overwritten \(set replication log for additional info\)")
-    res = pattern_errorlog(topology_m1c1.ms["master1"].errorlog_file, regex)
+    res = pattern_errorlog(topology_m1c1.ms["supplier1"].errorlog_file, regex)
     if res is not None:
         assert False
 
@@ -523,10 +523,10 @@ def test_schema_replication_seven(topology_m1c1, schema_replication_init):
     schema is pushed and there is no message in the error log
 
     :id: 8725055a-b3f8-4d1d-a4d6-bb7dccf644d0
-    :setup: Master Consumer, check if a warning message is logged in the
+    :setup: Supplier Consumer, check if a warning message is logged in the
             error log of the supplier and add a test entry to trigger attempt of schema push.
     :steps:
-        1. Add telenumber to 'masterNewOCA' on the master
+        1. Add telenumber to 'supplierNewOCA' on the supplier
         2. Push the Schema (no error)
         3. Check the schemaCSN was updated on the consumer
         4. Check the error log of the supplier does not contain an error
@@ -537,35 +537,35 @@ def test_schema_replication_seven(topology_m1c1, schema_replication_init):
         3. Operation should be successful
         4. Operation should be successful
         5. State at startup
-            - supplier +masterNewOCA     +masterNewOCB     +consumerNewOCA    +masterNewOCC
+            - supplier +supplierNewOCA     +supplierNewOCB     +consumerNewOCA    +supplierNewOCC
                        +must=telexnumber                   +must=telexnumber
-            - consumer +masterNewOCA     +masterNewOCB     +consumerNewOCA    +masterNewOCC
+            - consumer +supplierNewOCA     +supplierNewOCB     +consumerNewOCA    +supplierNewOCC
                        +must=telexnumber                   +must=telexnumber
            Final state
-            - supplier +masterNewOCA     +masterNewOCB     +consumerNewOCA    +masterNewOCC
+            - supplier +supplierNewOCA     +supplierNewOCB     +consumerNewOCA    +supplierNewOCC
                        +must=telexnumber                   +must=telexnumber
                        +may=postOfficeBox
-            - consumer +masterNewOCA     +masterNewOCB     +consumerNewOCA    +masterNewOCC
+            - consumer +supplierNewOCA     +supplierNewOCB     +consumerNewOCA    +supplierNewOCC
                        +must=telexnumber                   +must=telexnumber
                        +may=postOfficeBox
     """
     _header(topology_m1c1, "Same OC - extra MAY: Schema is pushed - no error")
 
-    mod_OC(topology_m1c1.ms["master1"], 2, 'masterNewOCA', old_must=MUST_NEW, new_must=MUST_NEW, old_may=MAY_OLD,
+    mod_OC(topology_m1c1.ms["supplier1"], 2, 'supplierNewOCA', old_must=MUST_NEW, new_must=MUST_NEW, old_may=MAY_OLD,
            new_may=MAY_NEW)
 
     trigger_schema_push(topology_m1c1)
-    master_schema_csn = topology_m1c1.ms["master1"].schema.get_schema_csn()
+    supplier_schema_csn = topology_m1c1.ms["supplier1"].schema.get_schema_csn()
     consumer_schema_csn = topology_m1c1.cs["consumer1"].schema.get_schema_csn()
 
     # Check the schemaCSN was updated on the consumer
-    log.debug("test_schema_replication_seven master_schema_csn=%s", master_schema_csn)
+    log.debug("test_schema_replication_seven supplier_schema_csn=%s", supplier_schema_csn)
     log.debug("ctest_schema_replication_seven consumer_schema_csn=%s", consumer_schema_csn)
-    assert master_schema_csn == consumer_schema_csn
+    assert supplier_schema_csn == consumer_schema_csn
 
     # Check the error log of the supplier does not contain an error
     regex = re.compile(r"must not be overwritten \(set replication log for additional info\)")
-    res = pattern_errorlog(topology_m1c1.ms["master1"].errorlog_file, regex)
+    res = pattern_errorlog(topology_m1c1.ms["supplier1"].errorlog_file, regex)
     if res is not None:
         assert False
 
@@ -576,7 +576,7 @@ def test_schema_replication_eight(topology_m1c1, schema_replication_init):
     schema is pushed (fix for 47721) and there is a message in the error log
 
     :id: 2310d150-a71a-498d-add8-4056beeb58c6
-    :setup: Master Consumer, check if a warning message is logged in the
+    :setup: Supplier Consumer, check if a warning message is logged in the
            error log of the supplier and add a test entry to trigger an attempted schema push.
     :steps:
         1. Add telenumber to 'consumerNewOCA' on the consumer
@@ -592,17 +592,17 @@ def test_schema_replication_eight(topology_m1c1, schema_replication_init):
         4. Operation should be successful
         5. Operation should be successful
         6. State at startup
-            - supplier +masterNewOCA     +masterNewOCB     +consumerNewOCA    +masterNewOCC
+            - supplier +supplierNewOCA     +supplierNewOCB     +consumerNewOCA    +supplierNewOCC
                        +must=telexnumber                   +must=telexnumber
                        +may=postOfficeBox
-            - consumer +masterNewOCA     +masterNewOCB     +consumerNewOCA    +masterNewOCC
+            - consumer +supplierNewOCA     +supplierNewOCB     +consumerNewOCA    +supplierNewOCC
                        +must=telexnumber                   +must=telexnumber
                        +may=postOfficeBox
            Final state
-            - supplier +masterNewOCA     +masterNewOCB     +consumerNewOCA    +masterNewOCC
+            - supplier +supplierNewOCA     +supplierNewOCB     +consumerNewOCA    +supplierNewOCC
                        +must=telexnumber                   +must=telexnumber
                        +may=postOfficeBox                                     +may=postOfficeBox
-            - consumer +masterNewOCA     +masterNewOCB     +consumerNewOCA    +masterNewOCC
+            - consumer +supplierNewOCA     +supplierNewOCB     +consumerNewOCA    +supplierNewOCC
                        +must=telexnumber                   +must=telexnumber
                        +may=postOfficeBox                  +may=postOfficeBox
     """
@@ -613,26 +613,26 @@ def test_schema_replication_eight(topology_m1c1, schema_replication_init):
 
     # modify OC on the supplier so that its nsSchemaCSN is larger than the consumer's (wait 2s)
     time.sleep(2)
-    mod_OC(topology_m1c1.ms["master1"], 4, 'masterNewOCC', old_must=MUST_OLD, new_must=MUST_OLD, old_may=MAY_OLD,
+    mod_OC(topology_m1c1.ms["supplier1"], 4, 'supplierNewOCC', old_must=MUST_OLD, new_must=MUST_OLD, old_may=MAY_OLD,
            new_may=MAY_NEW)
 
     trigger_schema_push(topology_m1c1)
-    master_schema_csn = topology_m1c1.ms["master1"].schema.get_schema_csn()
+    supplier_schema_csn = topology_m1c1.ms["supplier1"].schema.get_schema_csn()
     consumer_schema_csn = topology_m1c1.cs["consumer1"].schema.get_schema_csn()
 
     # Check the schemaCSN was not updated on the consumer
     # with 47721, supplier learns the missing definition
-    log.debug("test_schema_replication_eight master_schema_csn=%s", master_schema_csn)
+    log.debug("test_schema_replication_eight supplier_schema_csn=%s", supplier_schema_csn)
     log.debug("ctest_schema_replication_eight onsumer_schema_csn=%s", consumer_schema_csn)
     if support_schema_learning(topology_m1c1):
-        assert master_schema_csn == consumer_schema_csn
+        assert supplier_schema_csn == consumer_schema_csn
     else:
-        assert master_schema_csn != consumer_schema_csn
+        assert supplier_schema_csn != consumer_schema_csn
 
     # Check the error log of the supplier does not contain an error
     # This message may happen during the learning phase
     regex = re.compile(r"must not be overwritten \(set replication log for additional info\)")
-    res = pattern_errorlog(topology_m1c1.ms["master1"].errorlog_file, regex)
+    res = pattern_errorlog(topology_m1c1.ms["supplier1"].errorlog_file, regex)
 
 
 @pytest.mark.ds47490
@@ -641,10 +641,10 @@ def test_schema_replication_nine(topology_m1c1, schema_replication_init):
     schema is not pushed and there is a message in the error log
 
     :id: 851b24c6-b1e0-466f-9714-aa2940fbfeeb
-    :setup: Master Consumer, check if a warning message is logged in the
+    :setup: Supplier Consumer, check if a warning message is logged in the
            error log of the supplier and add a test entry to trigger an attempted schema push.
     :steps:
-        1. Add postOfficeBox to 'consumerNewOCA' on the master
+        1. Add postOfficeBox to 'consumerNewOCA' on the supplier
         3. Push the Schema
         4. Check the schemaCSN was updated on the consumer
         5. Check the error log of the supplier does contain an error
@@ -656,37 +656,37 @@ def test_schema_replication_nine(topology_m1c1, schema_replication_init):
         4. Operation should be successful
         5. Operation should be successful
         6. State at startup
-            - supplier +masterNewOCA     +masterNewOCB     +consumerNewOCA    +masterNewOCC
+            - supplier +supplierNewOCA     +supplierNewOCB     +consumerNewOCA    +supplierNewOCC
                        +must=telexnumber                   +must=telexnumber
                        +may=postOfficeBox                                     +may=postOfficeBox
-            - consumer +masterNewOCA     +masterNewOCB     +consumerNewOCA    +masterNewOCC
+            - consumer +supplierNewOCA     +supplierNewOCB     +consumerNewOCA    +supplierNewOCC
                        +must=telexnumber                   +must=telexnumber
                        +may=postOfficeBox                  +may=postOfficeBox
            Final state
-            - supplier +masterNewOCA     +masterNewOCB     +consumerNewOCA    +masterNewOCC
+            - supplier +supplierNewOCA     +supplierNewOCB     +consumerNewOCA    +supplierNewOCC
                        +must=telexnumber                   +must=telexnumber
                        +may=postOfficeBox                  +may=postOfficeBox +may=postOfficeBox
-            - consumer +masterNewOCA     +masterNewOCB     +consumerNewOCA    +masterNewOCC
+            - consumer +supplierNewOCA     +supplierNewOCB     +consumerNewOCA    +supplierNewOCC
                        +must=telexnumber                   +must=telexnumber
                        +may=postOfficeBox                  +may=postOfficeBox +may=postOfficeBox
     """
     _header(topology_m1c1, "Same OC - extra MAY: Schema is pushed - no error")
 
-    mod_OC(topology_m1c1.ms["master1"], 1, 'consumerNewOCA', old_must=MUST_NEW, new_must=MUST_NEW, old_may=MAY_OLD,
+    mod_OC(topology_m1c1.ms["supplier1"], 1, 'consumerNewOCA', old_must=MUST_NEW, new_must=MUST_NEW, old_may=MAY_OLD,
            new_may=MAY_NEW)
 
     trigger_schema_push(topology_m1c1)
-    master_schema_csn = topology_m1c1.ms["master1"].schema.get_schema_csn()
+    supplier_schema_csn = topology_m1c1.ms["supplier1"].schema.get_schema_csn()
     consumer_schema_csn = topology_m1c1.cs["consumer1"].schema.get_schema_csn()
 
     # Check the schemaCSN was updated on the consumer
-    log.debug("test_schema_replication_nine master_schema_csn=%s", master_schema_csn)
+    log.debug("test_schema_replication_nine supplier_schema_csn=%s", supplier_schema_csn)
     log.debug("ctest_schema_replication_nine onsumer_schema_csn=%s", consumer_schema_csn)
-    assert master_schema_csn == consumer_schema_csn
+    assert supplier_schema_csn == consumer_schema_csn
 
     # Check the error log of the supplier does not contain an error
     regex = re.compile(r"must not be overwritten \(set replication log for additional info\)")
-    res = pattern_errorlog(topology_m1c1.ms["master1"].errorlog_file, regex)
+    res = pattern_errorlog(topology_m1c1.ms["supplier1"].errorlog_file, regex)
     if res is not None:
         assert False
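
The schema tests above all follow one ritual: modify an objectclass on one side, force a push, compare nsSchemaCSN values, and scan the supplier's error log. As a reading aid, that recurring check condenses into the following hypothetical helper (a sketch only, reusing the trigger_schema_push and pattern_errorlog helpers defined earlier in this test module):

    import re

    def assert_schema_push(topology_m1c1, expect_push=True):
        # A replicated update is what carries the schema push along
        trigger_schema_push(topology_m1c1)
        supplier_csn = topology_m1c1.ms["supplier1"].schema.get_schema_csn()
        consumer_csn = topology_m1c1.cs["consumer1"].schema.get_schema_csn()
        # Equal CSNs mean the consumer accepted the pushed schema
        assert (supplier_csn == consumer_csn) == expect_push
        # The supplier must not have logged a refusal to overwrite the schema
        regex = re.compile(r"must not be overwritten \(set replication log for additional info\)")
        assert pattern_errorlog(topology_m1c1.ms["supplier1"].errorlog_file, regex) is None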
 

+ 16 - 16
dirsrvtests/tests/suites/state/mmt_state_test.py

@@ -34,7 +34,7 @@ def _check_user_oper_attrs(topo, tuser, attr_name, attr_value, oper_type, exp_va
     """Check if list of operational attributes present for a given entry"""
 
     log.info('Checking if operational attrs vucsn, adcsn and vdcsn are present for: {}'.format(tuser))
-    entry = topo.ms["master1"].search_s(tuser.dn, ldap.SCOPE_BASE, 'objectclass=*',['nscpentrywsi'])
+    entry = topo.ms["supplier1"].search_s(tuser.dn, ldap.SCOPE_BASE, 'objectclass=*',['nscpentrywsi'])
     if oper_attr:
         for line in str(entry).split('\n'):
             if attr_name + ';' in line:
@@ -59,8 +59,8 @@ def test_check_desc_attr_state(topo, attr_name, attr_value, oper_type, exp_value
 
     :id: f0830538-02cf-11e9-8be0-8c16451d917b
     :parametrized: yes
-    :setup: Replication with two masters.
-    :steps: 1. Add user to Master1 without description attribute.
+    :setup: Replication with two suppliers.
+    :steps: 1. Add user to Supplier1 without description attribute.
             2. Add description attribute to user.
             3. Check if only one description attribute exists.
             4. Check if operational attribute vucsn exists.
@@ -97,7 +97,7 @@ def test_check_desc_attr_state(topo, attr_name, attr_value, oper_type, exp_value
 
     test_entry = 'state1test'
     log.info('Add user: {}'.format(test_entry))
-    users = UserAccounts(topo.ms['master1'], DEFAULT_SUFFIX)
+    users = UserAccounts(topo.ms['supplier1'], DEFAULT_SUFFIX)
     try:
         tuser = users.get(test_entry)
     except ldap.NO_SUCH_OBJECT:
@@ -123,8 +123,8 @@ def test_check_cn_attr_state(topo, attr_name, attr_value, oper_type, exp_values,
 
     :id: 19614bae-02d0-11e9-a295-8c16451d917b
     :parametrized: yes
-    :setup: Replication with two masters.
-    :steps: 1. Add user to Master1 with cn attribute.
+    :setup: Replication with two suppliers.
+    :steps: 1. Add user to Supplier1 with cn attribute.
             2. Add a new cn attribute to user.
             3. Check if two cn attributes exist.
             4. Check if operational attribute vucsn exists for each cn attribute.
@@ -151,7 +151,7 @@ def test_check_cn_attr_state(topo, attr_name, attr_value, oper_type, exp_values,
 
     test_entry = 'TestCNusr1'
     log.info('Add user: {}'.format(test_entry))
-    users = UserAccounts(topo.ms['master1'], DEFAULT_SUFFIX)
+    users = UserAccounts(topo.ms['supplier1'], DEFAULT_SUFFIX)
     try:
         tuser = users.get(test_entry)
     except ldap.NO_SUCH_OBJECT:
@@ -182,8 +182,8 @@ def test_check_single_value_attr_state(topo, attr_name, attr_value, oper_type,
 
     :id: 22fd645e-02d0-11e9-a9e4-8c16451d917b
     :parametrized: yes
-    :setup: Replication with two masters.
-    :steps: 1. Add user to Master1 without preferredlanguage attribute.
+    :setup: Replication with two suppliers.
+    :steps: 1. Add user to Supplier1 without preferredlanguage attribute.
             2. Add a new preferredlanguage attribute to user.
             3. Check if one preferredlanguage attribute exists.
             4. Check if operational attribute vucsn exists.
@@ -204,7 +204,7 @@ def test_check_single_value_attr_state(topo, attr_name, attr_value, oper_type,
 
     test_entry = 'Langusr1'
     log.info('Add user: {}'.format(test_entry))
-    users = UserAccounts(topo.ms['master1'], DEFAULT_SUFFIX)
+    users = UserAccounts(topo.ms['supplier1'], DEFAULT_SUFFIX)
     try:
         tuser = users.get(test_entry)
     except ldap.NO_SUCH_OBJECT:
@@ -236,8 +236,8 @@ def test_check_subtype_attr_state(topo, attr_name, attr_value, oper_type, exp_va
 
     :id: 29ab87a4-02d0-11e9-b104-8c16451d917b
     :parametrized: yes
-    :setup: Replication with two masters.
-    :steps: 1. Add user to Master1 without roomnumber;office attribute.
+    :setup: Replication with two suppliers.
+    :steps: 1. Add user to Supplier1 without roomnumber;office attribute.
             2. Add roomnumber;office attribute to user.
             3. Check if only one roomnumber;office attribute exists.
             4. Check if operational attribute vucsn exists.
@@ -274,7 +274,7 @@ def test_check_subtype_attr_state(topo, attr_name, attr_value, oper_type, exp_va
 
     test_entry = 'roomoffice1usr'
     log.info('Add user: {}'.format(test_entry))
-    users = UserAccounts(topo.ms['master1'], DEFAULT_SUFFIX)
+    users = UserAccounts(topo.ms['supplier1'], DEFAULT_SUFFIX)
     try:
         tuser = users.get(test_entry)
     except ldap.NO_SUCH_OBJECT:
@@ -302,8 +302,8 @@ def test_check_jpeg_attr_state(topo, attr_name, attr_value, oper_type, exp_value
 
     :id: 312ac0d0-02d0-11e9-9d34-8c16451d917b
     :parametrized: yes
-    :setup: Replication with two masters.
-    :steps: 1. Add user to Master1 without jpegphoto attribute.
+    :setup: Replication with two suppliers.
+    :steps: 1. Add user to Supplier1 without jpegphoto attribute.
             2. Add jpegphoto attribute to user.
             3. Check if only one jpegphoto attribute exists.
             4. Check if operational attribute vucsn exists.
@@ -340,7 +340,7 @@ def test_check_jpeg_attr_state(topo, attr_name, attr_value, oper_type, exp_value
 
     test_entry = 'testJpeg1usr'
     log.info('Add user: {}'.format(test_entry))
-    users = UserAccounts(topo.ms['master1'], DEFAULT_SUFFIX)
+    users = UserAccounts(topo.ms['supplier1'], DEFAULT_SUFFIX)
     try:
         tuser = users.get(test_entry)
     except ldap.NO_SUCH_OBJECT:
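
Every parametrized test in this module reads per-value replication state through the operational nscpentrywsi attribute, exactly as _check_user_oper_attrs does above. A minimal sketch of that lookup, assuming a connected supplier instance and an entry DN:

    import ldap

    def value_state_lines(supplier, dn, attr_name):
        # nscpentrywsi returns the entry with its CSN state (vucsn/adcsn/vdcsn) inline
        entry = supplier.search_s(dn, ldap.SCOPE_BASE, 'objectclass=*', ['nscpentrywsi'])
        # State-bearing lines look like "attr;vucsn-<csn>: value"
        return [line for line in str(entry).split('\n') if attr_name + ';' in line]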

+ 2 - 2
dirsrvtests/tests/suites/syncrepl_plugin/basic_test.py

@@ -533,7 +533,7 @@ def test_sync_repl_cenotaph(topo_m2, request):
        sync repl client is running
 
     :id: 8ca1724a-cf42-4880-bf0f-be451f9bd3b4
-    :setup: MMR with 2 masters
+    :setup: MMR with 2 suppliers
     :steps:
         1. Enable retroCL/content_sync
         2. Run a sync repl client
@@ -547,7 +547,7 @@ def test_sync_repl_cenotaph(topo_m2, request):
         4. Should succeed
         5. Should succeed
     """
-    m1 = topo_m2.ms["master1"]
+    m1 = topo_m2.ms["supplier1"]
     # Enable/configure retroCL
     plugin = RetroChangelogPlugin(m1)
     plugin.disable()
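
The test needs the retro changelog running before the sync-repl client can observe anything. A minimal sketch of that setup step on one supplier, assuming a lib389 instance m1 (an illustrative enable sequence, not the test's exact code):

    from lib389.plugins import RetroChangelogPlugin

    plugin = RetroChangelogPlugin(m1)
    plugin.disable()   # reset to a known state before configuring
    plugin.enable()    # turn the retro changelog on
    m1.restart()       # plugin toggles take effect after a restart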

+ 11 - 11
dirsrvtests/tests/suites/vlv/regression_test.py

@@ -29,14 +29,14 @@ def test_bulk_import_when_the_backend_with_vlv_was_recreated(topology_m2):
     If the test passes without a server crash, 47966 is verified.
 
     :id: 512963fa-fe02-11e8-b1d3-8c16451d917b
-    :setup: Replication with two masters.
+    :setup: Replication with two suppliers.
     :steps:
         1. Generate vlvSearch entry
         2. Generate vlvIndex entry
-        3. Delete the backend instance on Master 2
+        3. Delete the backend instance on Supplier 2
         4. Delete the agreement, replica, and mapping tree, too.
-        5. Recreate the backend and the VLV index on Master 2.
-        6. Recreating vlvSrchDn and vlvIndexDn on Master 2.
+        5. Recreate the backend and the VLV index on Supplier 2.
+        6. Recreate vlvSrchDn and vlvIndexDn on Supplier 2.
     :expectedresults:
         1. Should succeed.
         2. Should succeed.
@@ -45,8 +45,8 @@ def test_bulk_import_when_the_backend_with_vlv_was_recreated(topology_m2):
         5. Should succeed.
         6. Should succeed.
     """
-    M1 = topology_m2.ms["master1"]
-    M2 = topology_m2.ms["master2"]
+    M1 = topology_m2.ms["supplier1"]
+    M2 = topology_m2.ms["supplier2"]
     # generate vlvSearch entry
     properties_for_search = {
         "objectclass": ["top", "vlvSearch"],
@@ -75,18 +75,18 @@ def test_bulk_import_when_the_backend_with_vlv_was_recreated(topology_m2):
     )
     assert "cn=vlvIdx,cn=vlvSrch,cn=userRoot,cn=ldbm database,cn=plugins,cn=config" in M2.getEntry(
         "cn=vlvIdx,cn=vlvSrch,cn=userRoot,cn=ldbm database,cn=plugins,cn=config").dn
-    # Delete the backend instance on Master 2."
+    # Delete the backend instance on Supplier 2.
     userroot_index.delete()
     userroot_vlvsearch.delete_all()
     # delete the agreement, replica, and mapping tree, too.
     repl = ReplicationManager(DEFAULT_SUFFIX)
-    repl.remove_master(M2)
+    repl.remove_supplier(M2)
     MappingTrees(M2).list()[0].delete()
     Backends(M2).list()[0].delete()
-    # Recreate the backend and the VLV index on Master 2.
+    # Recreate the backend and the VLV index on Supplier 2.
     M2.backend.create(DEFAULT_SUFFIX, {BACKEND_NAME: "userRoot"})
     M2.mappingtree.create(DEFAULT_SUFFIX, "userRoot")
-    # Recreating vlvSrchDn and vlvIndexDn on Master 2.
+    # Recreating vlvSrchDn and vlvIndexDn on Supplier 2.
     vlv_searches.create(
         basedn="cn=userRoot,cn=ldbm database,cn=plugins,cn=config",
         properties=properties_for_search,
@@ -96,7 +96,7 @@ def test_bulk_import_when_the_backend_with_vlv_was_recreated(topology_m2):
         properties=properties_for_index,
     )
     M2.restart()
-    repl.join_master(M1, M2)
+    repl.join_supplier(M1, M2)
     repl.test_replication(M1, M2, 30)
     repl.test_replication(M2, M1, 30)
     entries = M2.search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, "(cn=*)")
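
In short, the test detaches one supplier from the topology and wires it back in, which drives a bulk import (total init) over the recreated backend. The cycle, condensed from the calls above:

    from lib389.replica import ReplicationManager

    repl = ReplicationManager(DEFAULT_SUFFIX)
    repl.remove_supplier(M2)             # drop M2's replica config and agreements
    # ... recreate the backend, mapping tree, and VLV entries on M2 ...
    repl.join_supplier(M1, M2)           # re-enable replication; total init follows
    repl.test_replication(M1, M2, 30)    # verify convergence in both directions
    repl.test_replication(M2, M1, 30)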

+ 18 - 18
dirsrvtests/tests/tickets/ticket47573_test.py

@@ -96,7 +96,7 @@ def trigger_schema_push(topology_m1c1):
     except AttributeError:
         trigger_schema_push.value = 1
     replace = [(ldap.MOD_REPLACE, 'telephonenumber', ensure_bytes(str(trigger_schema_push.value)))]
-    topology_m1c1.ms["master1"].modify_s(ENTRY_DN, replace)
+    topology_m1c1.ms["supplier1"].modify_s(ENTRY_DN, replace)
 
     # wait up to 10 seconds for the update to be replicated
     loop = 0
@@ -120,14 +120,14 @@ def test_ticket47573_init(topology_m1c1):
     """
         Initialize the test environment
     """
-    log.debug("test_ticket47573_init topology_m1c1 %r (master %r, consumer %r" %
-              (topology_m1c1, topology_m1c1.ms["master1"], topology_m1c1.cs["consumer1"]))
+    log.debug("test_ticket47573_init topology_m1c1 %r (supplier %r, consumer %r" %
+              (topology_m1c1, topology_m1c1.ms["supplier1"], topology_m1c1.cs["consumer1"]))
     # the test case will check if a warning message is logged in the
     # error log of the supplier
-    topology_m1c1.ms["master1"].errorlog_file = open(topology_m1c1.ms["master1"].errlog, "r")
+    topology_m1c1.ms["supplier1"].errorlog_file = open(topology_m1c1.ms["supplier1"].errlog, "r")
 
     # This entry will be used to trigger an attempted schema push
-    topology_m1c1.ms["master1"].add_s(Entry((ENTRY_DN, {
+    topology_m1c1.ms["supplier1"].add_s(Entry((ENTRY_DN, {
         'objectclass': "top person".split(),
         'sn': 'test_entry',
         'cn': 'test_entry'})))
@@ -144,27 +144,27 @@ def test_ticket47573_one(topology_m1c1):
             - consumer +OCwithMayAttr
 
     """
-    log.debug("test_ticket47573_one topology_m1c1 %r (master %r, consumer %r" % (
-    topology_m1c1, topology_m1c1.ms["master1"], topology_m1c1.cs["consumer1"]))
+    log.debug("test_ticket47573_one topology_m1c1 %r (supplier %r, consumer %r" % (
+    topology_m1c1, topology_m1c1.ms["supplier1"], topology_m1c1.cs["consumer1"]))
     # update the schema of the supplier so that it is a superset of the
     # consumer's. The schema should then be pushed
     new_oc = _oc_definition(2, 'OCwithMayAttr',
                             must=MUST_OLD,
                             may=MAY_OLD)
-    topology_m1c1.ms["master1"].schema.add_schema('objectClasses', new_oc)
+    topology_m1c1.ms["supplier1"].schema.add_schema('objectClasses', new_oc)
 
     trigger_schema_push(topology_m1c1)
-    master_schema_csn = topology_m1c1.ms["master1"].schema.get_schema_csn()
+    supplier_schema_csn = topology_m1c1.ms["supplier1"].schema.get_schema_csn()
     consumer_schema_csn = topology_m1c1.cs["consumer1"].schema.get_schema_csn()
 
     # Check the schemaCSN was updated on the consumer
-    log.debug("test_ticket47573_one master_schema_csn=%s", master_schema_csn)
+    log.debug("test_ticket47573_one supplier_schema_csn=%s", supplier_schema_csn)
     log.debug("ctest_ticket47573_one onsumer_schema_csn=%s", consumer_schema_csn)
-    assert master_schema_csn == consumer_schema_csn
+    assert supplier_schema_csn == consumer_schema_csn
 
     # Check the error log of the supplier does not contain an error
     regex = re.compile("must not be overwritten \(set replication log for additional info\)")
-    res = pattern_errorlog(topology_m1c1.ms["master1"].errorlog_file, regex)
+    res = pattern_errorlog(topology_m1c1.ms["supplier1"].errorlog_file, regex)
     assert res is None
 
 
@@ -180,22 +180,22 @@ def test_ticket47573_two(topology_m1c1):
     """
 
     # Update the objectclass so that a MAY attribute is moved to MUST attribute
-    mod_OC(topology_m1c1.ms["master1"], 2, 'OCwithMayAttr', old_must=MUST_OLD, new_must=MUST_NEW, old_may=MAY_OLD,
+    mod_OC(topology_m1c1.ms["supplier1"], 2, 'OCwithMayAttr', old_must=MUST_OLD, new_must=MUST_NEW, old_may=MAY_OLD,
            new_may=MAY_NEW)
 
     # now push the schema
     trigger_schema_push(topology_m1c1)
-    master_schema_csn = topology_m1c1.ms["master1"].schema.get_schema_csn()
+    supplier_schema_csn = topology_m1c1.ms["supplier1"].schema.get_schema_csn()
     consumer_schema_csn = topology_m1c1.cs["consumer1"].schema.get_schema_csn()
 
     # Check the schemaCSN was NOT updated on the consumer
-    log.debug("test_ticket47573_two master_schema_csn=%s", master_schema_csn)
+    log.debug("test_ticket47573_two supplier_schema_csn=%s", supplier_schema_csn)
     log.debug("test_ticket47573_two consumer_schema_csn=%s", consumer_schema_csn)
-    assert master_schema_csn == consumer_schema_csn
+    assert supplier_schema_csn == consumer_schema_csn
 
     # Check the error log of the supplier does not contain an error
     regex = re.compile("must not be overwritten \(set replication log for additional info\)")
-    res = pattern_errorlog(topology_m1c1.ms["master1"].errorlog_file, regex)
+    res = pattern_errorlog(topology_m1c1.ms["supplier1"].errorlog_file, regex)
     assert res is None
 
 
@@ -205,7 +205,7 @@ def test_ticket47573_three(topology_m1c1):
     '''
     # Check replication is working fine
     dn = "cn=ticket47573, %s" % SUFFIX
-    topology_m1c1.ms["master1"].add_s(Entry((dn,
+    topology_m1c1.ms["supplier1"].add_s(Entry((dn,
                                              {'objectclass': "top person OCwithMayAttr".split(),
                                               'sn': 'test_repl',
                                               'cn': 'test_repl',
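
trigger_schema_push above leans on the fact that schema only travels alongside an ordinary replicated update: it bumps telephonenumber on the fixture entry and polls the consumer until the value lands. A minimal sketch of that poll, assuming ENTRY_DN exists on both sides:

    import time
    import ldap

    supplier = topology_m1c1.ms["supplier1"]
    consumer = topology_m1c1.cs["consumer1"]
    supplier.modify_s(ENTRY_DN, [(ldap.MOD_REPLACE, 'telephonenumber', b'1')])

    # Wait up to ~10s for the update (and the schema riding along) to arrive
    for _ in range(10):
        ent = consumer.getEntry(ENTRY_DN, ldap.SCOPE_BASE, "(objectclass=*)")
        if ent.hasAttr('telephonenumber') and ent.getValue('telephonenumber') == b'1':
            break
        time.sleep(1)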

+ 14 - 14
dirsrvtests/tests/tickets/ticket47619_test.py

@@ -39,39 +39,39 @@ def test_ticket47619_init(topology_m1c1):
     """
         Initialize the test environment
     """
-    topology_m1c1.ms["master1"].plugins.enable(name=PLUGIN_RETRO_CHANGELOG)
-    # topology_m1c1.ms["master1"].plugins.enable(name=PLUGIN_MEMBER_OF)
-    # topology_m1c1.ms["master1"].plugins.enable(name=PLUGIN_REFER_INTEGRITY)
-    topology_m1c1.ms["master1"].stop(timeout=10)
-    topology_m1c1.ms["master1"].start(timeout=10)
+    topology_m1c1.ms["supplier1"].plugins.enable(name=PLUGIN_RETRO_CHANGELOG)
+    # topology_m1c1.ms["supplier1"].plugins.enable(name=PLUGIN_MEMBER_OF)
+    # topology_m1c1.ms["supplier1"].plugins.enable(name=PLUGIN_REFER_INTEGRITY)
+    topology_m1c1.ms["supplier1"].stop(timeout=10)
+    topology_m1c1.ms["supplier1"].start(timeout=10)
 
-    topology_m1c1.ms["master1"].log.info("test_ticket47619_init topology_m1c1 %r" % (topology_m1c1))
+    topology_m1c1.ms["supplier1"].log.info("test_ticket47619_init topology_m1c1 %r" % (topology_m1c1))
     # the test case will check if a warning message is logged in the
     # error log of the supplier
-    topology_m1c1.ms["master1"].errorlog_file = open(topology_m1c1.ms["master1"].errlog, "r")
+    topology_m1c1.ms["supplier1"].errorlog_file = open(topology_m1c1.ms["supplier1"].errlog, "r")
 
     # add dummy entries
     for cpt in range(MAX_OTHERS):
         name = "%s%d" % (OTHER_NAME, cpt)
-        topology_m1c1.ms["master1"].add_s(Entry(("cn=%s,%s" % (name, SUFFIX), {
+        topology_m1c1.ms["supplier1"].add_s(Entry(("cn=%s,%s" % (name, SUFFIX), {
             'objectclass': "top person".split(),
             'sn': name,
             'cn': name})))
 
-    topology_m1c1.ms["master1"].log.info(
+    topology_m1c1.ms["supplier1"].log.info(
         "test_ticket47619_init: %d entries ADDed %s[0..%d]" % (MAX_OTHERS, OTHER_NAME, MAX_OTHERS - 1))
 
     # Check the number of entries in the retro changelog
     time.sleep(2)
-    ents = topology_m1c1.ms["master1"].search_s(RETROCL_SUFFIX, ldap.SCOPE_ONELEVEL, "(objectclass=*)")
+    ents = topology_m1c1.ms["supplier1"].search_s(RETROCL_SUFFIX, ldap.SCOPE_ONELEVEL, "(objectclass=*)")
     assert len(ents) == MAX_OTHERS
 
 
 def test_ticket47619_create_index(topology_m1c1):
     args = {INDEX_TYPE: 'eq'}
     for attr in ATTRIBUTES:
-        topology_m1c1.ms["master1"].index.create(suffix=RETROCL_SUFFIX, attr=attr, args=args)
-    topology_m1c1.ms["master1"].restart(timeout=10)
+        topology_m1c1.ms["supplier1"].index.create(suffix=RETROCL_SUFFIX, attr=attr, args=args)
+    topology_m1c1.ms["supplier1"].restart(timeout=10)
 
 
 def test_ticket47619_reindex(topology_m1c1):
@@ -80,13 +80,13 @@ def test_ticket47619_reindex(topology_m1c1):
     '''
     args = {TASK_WAIT: True}
     for attr in ATTRIBUTES:
-        rc = topology_m1c1.ms["master1"].tasks.reindex(suffix=RETROCL_SUFFIX, attrname=attr, args=args)
+        rc = topology_m1c1.ms["supplier1"].tasks.reindex(suffix=RETROCL_SUFFIX, attrname=attr, args=args)
         assert rc == 0
 
 
 def test_ticket47619_check_indexed_search(topology_m1c1):
     for attr in ATTRIBUTES:
-        ents = topology_m1c1.ms["master1"].search_s(RETROCL_SUFFIX, ldap.SCOPE_SUBTREE, "(%s=hello)" % attr)
+        ents = topology_m1c1.ms["supplier1"].search_s(RETROCL_SUFFIX, ldap.SCOPE_SUBTREE, "(%s=hello)" % attr)
         assert len(ents) == 0
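
Taken together, the three tests above index, reindex, and query the retro changelog suffix. A condensed sketch using the same calls, with RETROCL_SUFFIX as the retro changelog base imported by the test:

    inst = topology_m1c1.ms["supplier1"]

    # Create an equality index for each attribute under the retro changelog
    for attr in ATTRIBUTES:
        inst.index.create(suffix=RETROCL_SUFFIX, attr=attr, args={INDEX_TYPE: 'eq'})
    inst.restart(timeout=10)

    # Rebuild each index, then confirm an indexed search comes back clean
    for attr in ATTRIBUTES:
        assert inst.tasks.reindex(suffix=RETROCL_SUFFIX, attrname=attr, args={TASK_WAIT: True}) == 0
        assert len(inst.search_s(RETROCL_SUFFIX, ldap.SCOPE_SUBTREE, "(%s=hello)" % attr)) == 0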
 
 

+ 73 - 73
dirsrvtests/tests/tickets/ticket47653MMR_test.py

@@ -69,13 +69,13 @@ def test_ticket47653_init(topology_m2):
 
     """
 
-    topology_m2.ms["master1"].log.info("Add %s that allows 'member' attribute" % OC_NAME)
+    topology_m2.ms["supplier1"].log.info("Add %s that allows 'member' attribute" % OC_NAME)
     new_oc = _oc_definition(2, OC_NAME, must=MUST, may=MAY)
-    topology_m2.ms["master1"].schema.add_schema('objectClasses', new_oc)
+    topology_m2.ms["supplier1"].schema.add_schema('objectClasses', new_oc)
 
     # entry used to bind with
-    topology_m2.ms["master1"].log.info("Add %s" % BIND_DN)
-    topology_m2.ms["master1"].add_s(Entry((BIND_DN, {
+    topology_m2.ms["supplier1"].log.info("Add %s" % BIND_DN)
+    topology_m2.ms["supplier1"].add_s(Entry((BIND_DN, {
         'objectclass': "top person".split(),
         'sn': BIND_NAME,
         'cn': BIND_NAME,
@@ -84,18 +84,18 @@ def test_ticket47653_init(topology_m2):
     if DEBUGGING:
         # enable acl error logging
         mod = [(ldap.MOD_REPLACE, 'nsslapd-errorlog-level', ensure_bytes(str(128 + 8192)))]  # ACL + REPL
-        topology_m2.ms["master1"].modify_s(DN_CONFIG, mod)
-        topology_m2.ms["master2"].modify_s(DN_CONFIG, mod)
+        topology_m2.ms["supplier1"].modify_s(DN_CONFIG, mod)
+        topology_m2.ms["supplier2"].modify_s(DN_CONFIG, mod)
 
     # remove all ACIs and start with a clean slate
     mod = [(ldap.MOD_DELETE, 'aci', None)]
-    topology_m2.ms["master1"].modify_s(SUFFIX, mod)
-    topology_m2.ms["master2"].modify_s(SUFFIX, mod)
+    topology_m2.ms["supplier1"].modify_s(SUFFIX, mod)
+    topology_m2.ms["supplier2"].modify_s(SUFFIX, mod)
 
     # add dummy entries
     for cpt in range(MAX_OTHERS):
         name = "%s%d" % (OTHER_NAME, cpt)
-        topology_m2.ms["master1"].add_s(Entry(("cn=%s,%s" % (name, SUFFIX), {
+        topology_m2.ms["supplier1"].add_s(Entry(("cn=%s,%s" % (name, SUFFIX), {
             'objectclass': "top person".split(),
             'sn': name,
             'cn': name})))
@@ -103,19 +103,19 @@ def test_ticket47653_init(topology_m2):
 
 def test_ticket47653_add(topology_m2):
     '''
-        This test ADD an entry on MASTER1 where 47653 is fixed. Then it checks that entry is replicated
-        on MASTER2 (even if on MASTER2 47653 is NOT fixed). Then update on MASTER2 and check the update on MASTER1
+        This test ADDs an entry on SUPPLIER1 where 47653 is fixed. Then it checks that the entry is replicated
+        on SUPPLIER2 (even if 47653 is NOT fixed on SUPPLIER2). Then it updates on SUPPLIER2 and checks the update on SUPPLIER1
 
         It checks that, bound as bind_entry,
             - we can not ADD an entry without the proper SELFDN aci.
             - with the proper ACI we can not ADD with 'member' attribute
             - with the proper ACI and 'member' it succeeds to ADD
     '''
-    topology_m2.ms["master1"].log.info("\n\n######################### ADD ######################\n")
+    topology_m2.ms["supplier1"].log.info("\n\n######################### ADD ######################\n")
 
     # bind as bind_entry
-    topology_m2.ms["master1"].log.info("Bind as %s" % BIND_DN)
-    topology_m2.ms["master1"].simple_bind_s(BIND_DN, BIND_PW)
+    topology_m2.ms["supplier1"].log.info("Bind as %s" % BIND_DN)
+    topology_m2.ms["supplier1"].simple_bind_s(BIND_DN, BIND_PW)
 
     # Prepare the entry with multivalued members
     entry_with_members = Entry(ENTRY_DN)
@@ -144,16 +144,16 @@ def test_ticket47653_add(topology_m2):
 
     # entry to add WITH member being BIND_DN but WITHOUT the ACI -> ldap.INSUFFICIENT_ACCESS
     try:
-        topology_m2.ms["master1"].log.info("Try to add Add  %s (aci is missing): %r" % (ENTRY_DN, entry_with_member))
+        topology_m2.ms["supplier1"].log.info("Try to add Add  %s (aci is missing): %r" % (ENTRY_DN, entry_with_member))
 
-        topology_m2.ms["master1"].add_s(entry_with_member)
+        topology_m2.ms["supplier1"].add_s(entry_with_member)
     except Exception as e:
-        topology_m2.ms["master1"].log.info("Exception (expected): %s" % type(e).__name__)
+        topology_m2.ms["supplier1"].log.info("Exception (expected): %s" % type(e).__name__)
         assert isinstance(e, ldap.INSUFFICIENT_ACCESS)
 
     # Ok Now add the proper ACI
-    topology_m2.ms["master1"].log.info("Bind as %s and add the ADD SELFDN aci" % DN_DM)
-    topology_m2.ms["master1"].simple_bind_s(DN_DM, PASSWORD)
+    topology_m2.ms["supplier1"].log.info("Bind as %s and add the ADD SELFDN aci" % DN_DM)
+    topology_m2.ms["supplier1"].simple_bind_s(DN_DM, PASSWORD)
 
     ACI_TARGET = "(target = \"ldap:///cn=*,%s\")" % SUFFIX
     ACI_TARGETFILTER = "(targetfilter =\"(objectClass=%s)\")" % OC_NAME
@@ -161,70 +161,70 @@ def test_ticket47653_add(topology_m2):
     ACI_SUBJECT = " userattr = \"member#selfDN\";)"
     ACI_BODY = ACI_TARGET + ACI_TARGETFILTER + ACI_ALLOW + ACI_SUBJECT
     mod = [(ldap.MOD_ADD, 'aci', ensure_bytes(ACI_BODY))]
-    topology_m2.ms["master1"].modify_s(SUFFIX, mod)
+    topology_m2.ms["supplier1"].modify_s(SUFFIX, mod)
     time.sleep(1)
 
     # bind as bind_entry
-    topology_m2.ms["master1"].log.info("Bind as %s" % BIND_DN)
-    topology_m2.ms["master1"].simple_bind_s(BIND_DN, BIND_PW)
+    topology_m2.ms["supplier1"].log.info("Bind as %s" % BIND_DN)
+    topology_m2.ms["supplier1"].simple_bind_s(BIND_DN, BIND_PW)
 
     # entry to add WITHOUT member and WITH the ACI -> ldap.INSUFFICIENT_ACCESS
     try:
-        topology_m2.ms["master1"].log.info("Try to add Add  %s (member is missing)" % ENTRY_DN)
-        topology_m2.ms["master1"].add_s(Entry((ENTRY_DN, {
+        topology_m2.ms["supplier1"].log.info("Try to add Add  %s (member is missing)" % ENTRY_DN)
+        topology_m2.ms["supplier1"].add_s(Entry((ENTRY_DN, {
             'objectclass': ENTRY_OC.split(),
             'sn': ENTRY_NAME,
             'cn': ENTRY_NAME,
             'postalAddress': 'here',
             'postalCode': '1234'})))
     except Exception as e:
-        topology_m2.ms["master1"].log.info("Exception (expected): %s" % type(e).__name__)
+        topology_m2.ms["supplier1"].log.info("Exception (expected): %s" % type(e).__name__)
         assert isinstance(e, ldap.INSUFFICIENT_ACCESS)
     time.sleep(1)
 
     # entry to add WITH memberS and WITH the ACI -> ldap.INSUFFICIENT_ACCESS
     # member should contain only one value
     try:
-        topology_m2.ms["master1"].log.info("Try to add Add  %s (with several member values)" % ENTRY_DN)
-        topology_m2.ms["master1"].add_s(entry_with_members)
+        topology_m2.ms["supplier1"].log.info("Try to add Add  %s (with several member values)" % ENTRY_DN)
+        topology_m2.ms["supplier1"].add_s(entry_with_members)
     except Exception as e:
-        topology_m2.ms["master1"].log.info("Exception (expected): %s" % type(e).__name__)
+        topology_m2.ms["supplier1"].log.info("Exception (expected): %s" % type(e).__name__)
         assert isinstance(e, ldap.INSUFFICIENT_ACCESS)
     time.sleep(2)
 
-    topology_m2.ms["master1"].log.info("Try to add Add  %s should be successful" % ENTRY_DN)
+    topology_m2.ms["supplier1"].log.info("Try to add Add  %s should be successful" % ENTRY_DN)
     try:
-        topology_m2.ms["master1"].add_s(entry_with_member)
+        topology_m2.ms["supplier1"].add_s(entry_with_member)
     except ldap.LDAPError as e:
-        topology_m2.ms["master1"].log.info("Failed to add entry,  error: " + e.message['desc'])
+        topology_m2.ms["supplier1"].log.info("Failed to add entry,  error: " + e.message['desc'])
         assert False
 
     #
     # Now check the entry has been replicated
     #
-    topology_m2.ms["master2"].simple_bind_s(DN_DM, PASSWORD)
-    topology_m2.ms["master1"].log.info("Try to retrieve %s from Master2" % ENTRY_DN)
+    topology_m2.ms["supplier2"].simple_bind_s(DN_DM, PASSWORD)
+    topology_m2.ms["supplier1"].log.info("Try to retrieve %s from Supplier2" % ENTRY_DN)
     loop = 0
     while loop <= 10:
         try:
-            ent = topology_m2.ms["master2"].getEntry(ENTRY_DN, ldap.SCOPE_BASE, "(objectclass=*)")
+            ent = topology_m2.ms["supplier2"].getEntry(ENTRY_DN, ldap.SCOPE_BASE, "(objectclass=*)")
             break
         except ldap.NO_SUCH_OBJECT:
             time.sleep(1)
             loop += 1
     assert loop <= 10
 
-    # Now update the entry on Master2 (as DM because 47653 is possibly not fixed on M2)
-    topology_m2.ms["master1"].log.info("Update  %s on M2" % ENTRY_DN)
+    # Now update the entry on Supplier2 (as DM because 47653 is possibly not fixed on M2)
+    topology_m2.ms["supplier1"].log.info("Update  %s on M2" % ENTRY_DN)
     mod = [(ldap.MOD_REPLACE, 'description', b'test_add')]
-    topology_m2.ms["master2"].modify_s(ENTRY_DN, mod)
+    topology_m2.ms["supplier2"].modify_s(ENTRY_DN, mod)
     time.sleep(1)
 
-    topology_m2.ms["master1"].simple_bind_s(DN_DM, PASSWORD)
+    topology_m2.ms["supplier1"].simple_bind_s(DN_DM, PASSWORD)
     loop = 0
     while loop <= 10:
         try:
-            ent = topology_m2.ms["master1"].getEntry(ENTRY_DN, ldap.SCOPE_BASE, "(objectclass=*)")
+            ent = topology_m2.ms["supplier1"].getEntry(ENTRY_DN, ldap.SCOPE_BASE, "(objectclass=*)")
             if ent.hasAttr('description') and (ensure_str(ent.getValue('description')) == 'test_add'):
                 break
         except ldap.NO_SUCH_OBJECT:
@@ -236,32 +236,32 @@ def test_ticket47653_add(topology_m2):
 
 def test_ticket47653_modify(topology_m2):
     '''
-        This test MOD an entry on MASTER1 where 47653 is fixed. Then it checks that update is replicated
-        on MASTER2 (even if on MASTER2 47653 is NOT fixed). Then update on MASTER2 (bound as BIND_DN).
-        This update may fail whether or not 47653 is fixed on MASTER2
+        This test MODs an entry on SUPPLIER1 where 47653 is fixed. Then it checks that the update is replicated
+        on SUPPLIER2 (even if 47653 is NOT fixed on SUPPLIER2). Then it updates on SUPPLIER2 (bound as BIND_DN).
+        This update may fail whether or not 47653 is fixed on SUPPLIER2
 
         It checks that, bound as bind_entry,
             - we can not modify an entry without the proper SELFDN aci.
             - adding the ACI, we can modify the entry
     '''
     # bind as bind_entry
-    topology_m2.ms["master1"].log.info("Bind as %s" % BIND_DN)
-    topology_m2.ms["master1"].simple_bind_s(BIND_DN, BIND_PW)
+    topology_m2.ms["supplier1"].log.info("Bind as %s" % BIND_DN)
+    topology_m2.ms["supplier1"].simple_bind_s(BIND_DN, BIND_PW)
 
-    topology_m2.ms["master1"].log.info("\n\n######################### MODIFY ######################\n")
+    topology_m2.ms["supplier1"].log.info("\n\n######################### MODIFY ######################\n")
 
     # entry to modify WITH member being BIND_DN but WITHOUT the ACI -> ldap.INSUFFICIENT_ACCESS
     try:
-        topology_m2.ms["master1"].log.info("Try to modify  %s (aci is missing)" % ENTRY_DN)
+        topology_m2.ms["supplier1"].log.info("Try to modify  %s (aci is missing)" % ENTRY_DN)
         mod = [(ldap.MOD_REPLACE, 'postalCode', b'9876')]
-        topology_m2.ms["master1"].modify_s(ENTRY_DN, mod)
+        topology_m2.ms["supplier1"].modify_s(ENTRY_DN, mod)
     except Exception as e:
-        topology_m2.ms["master1"].log.info("Exception (expected): %s" % type(e).__name__)
+        topology_m2.ms["supplier1"].log.info("Exception (expected): %s" % type(e).__name__)
         assert isinstance(e, ldap.INSUFFICIENT_ACCESS)
 
     # Ok Now add the proper ACI
-    topology_m2.ms["master1"].log.info("Bind as %s and add the WRITE SELFDN aci" % DN_DM)
-    topology_m2.ms["master1"].simple_bind_s(DN_DM, PASSWORD)
+    topology_m2.ms["supplier1"].log.info("Bind as %s and add the WRITE SELFDN aci" % DN_DM)
+    topology_m2.ms["supplier1"].simple_bind_s(DN_DM, PASSWORD)
 
     ACI_TARGET = "(target = \"ldap:///cn=*,%s\")" % SUFFIX
     ACI_TARGETATTR = "(targetattr = *)"
@@ -270,35 +270,35 @@ def test_ticket47653_modify(topology_m2):
     ACI_SUBJECT = " userattr = \"member#selfDN\";)"
     ACI_BODY = ACI_TARGET + ACI_TARGETATTR + ACI_TARGETFILTER + ACI_ALLOW + ACI_SUBJECT
     mod = [(ldap.MOD_ADD, 'aci', ensure_bytes(ACI_BODY))]
-    topology_m2.ms["master1"].modify_s(SUFFIX, mod)
+    topology_m2.ms["supplier1"].modify_s(SUFFIX, mod)
     time.sleep(2)
 
     # bind as bind_entry
-    topology_m2.ms["master1"].log.info("M1: Bind as %s" % BIND_DN)
-    topology_m2.ms["master1"].simple_bind_s(BIND_DN, BIND_PW)
+    topology_m2.ms["supplier1"].log.info("M1: Bind as %s" % BIND_DN)
+    topology_m2.ms["supplier1"].simple_bind_s(BIND_DN, BIND_PW)
     time.sleep(1)
 
     # modify the entry and checks the value
-    topology_m2.ms["master1"].log.info("M1: Try to modify  %s. It should succeeds" % ENTRY_DN)
+    topology_m2.ms["supplier1"].log.info("M1: Try to modify  %s. It should succeeds" % ENTRY_DN)
     mod = [(ldap.MOD_REPLACE, 'postalCode', b'1928')]
-    topology_m2.ms["master1"].modify_s(ENTRY_DN, mod)
+    topology_m2.ms["supplier1"].modify_s(ENTRY_DN, mod)
 
-    topology_m2.ms["master1"].log.info("M1: Bind as %s" % DN_DM)
-    topology_m2.ms["master1"].simple_bind_s(DN_DM, PASSWORD)
+    topology_m2.ms["supplier1"].log.info("M1: Bind as %s" % DN_DM)
+    topology_m2.ms["supplier1"].simple_bind_s(DN_DM, PASSWORD)
 
-    topology_m2.ms["master1"].log.info("M1: Check the update of %s" % ENTRY_DN)
-    ents = topology_m2.ms["master1"].search_s(ENTRY_DN, ldap.SCOPE_BASE, 'objectclass=*')
+    topology_m2.ms["supplier1"].log.info("M1: Check the update of %s" % ENTRY_DN)
+    ents = topology_m2.ms["supplier1"].search_s(ENTRY_DN, ldap.SCOPE_BASE, 'objectclass=*')
     assert len(ents) == 1
     assert ensure_str(ents[0].postalCode) == '1928'
 
     # Now check the update has been replicated on M2
-    topology_m2.ms["master1"].log.info("M2: Bind as %s" % DN_DM)
-    topology_m2.ms["master2"].simple_bind_s(DN_DM, PASSWORD)
-    topology_m2.ms["master1"].log.info("M2: Try to retrieve %s" % ENTRY_DN)
+    topology_m2.ms["supplier1"].log.info("M2: Bind as %s" % DN_DM)
+    topology_m2.ms["supplier2"].simple_bind_s(DN_DM, PASSWORD)
+    topology_m2.ms["supplier1"].log.info("M2: Try to retrieve %s" % ENTRY_DN)
     loop = 0
     while loop <= 10:
         try:
-            ent = topology_m2.ms["master2"].getEntry(ENTRY_DN, ldap.SCOPE_BASE, "(objectclass=*)")
+            ent = topology_m2.ms["supplier2"].getEntry(ENTRY_DN, ldap.SCOPE_BASE, "(objectclass=*)")
             if ent.hasAttr('postalCode') and (ensure_str(ent.getValue('postalCode')) == '1928'):
                 break
         except ldap.NO_SUCH_OBJECT:
@@ -307,32 +307,32 @@ def test_ticket47653_modify(topology_m2):
     assert loop <= 10
     assert ensure_str(ent.getValue('postalCode')) == '1928'
 
-    # Now update the entry on Master2 bound as BIND_DN (update may fail if  47653 is  not fixed on M2)
-    topology_m2.ms["master1"].log.info("M2: Update  %s (bound as %s)" % (ENTRY_DN, BIND_DN))
-    topology_m2.ms["master2"].simple_bind_s(BIND_DN, PASSWORD)
+    # Now update the entry on Supplier2 bound as BIND_DN (update may fail if 47653 is not fixed on M2)
+    topology_m2.ms["supplier1"].log.info("M2: Update  %s (bound as %s)" % (ENTRY_DN, BIND_DN))
+    topology_m2.ms["supplier2"].simple_bind_s(BIND_DN, PASSWORD)
     time.sleep(1)
     fail = False
     try:
         mod = [(ldap.MOD_REPLACE, 'postalCode', b'1929')]
-        topology_m2.ms["master2"].modify_s(ENTRY_DN, mod)
+        topology_m2.ms["supplier2"].modify_s(ENTRY_DN, mod)
         fail = False
     except ldap.INSUFFICIENT_ACCESS:
-        topology_m2.ms["master1"].log.info(
+        topology_m2.ms["supplier1"].log.info(
             "M2: Exception (INSUFFICIENT_ACCESS): that is fine the bug is possibly not fixed on M2")
         fail = True
     except Exception as e:
-        topology_m2.ms["master1"].log.info("M2: Exception (not expected): %s" % type(e).__name__)
+        topology_m2.ms["supplier1"].log.info("M2: Exception (not expected): %s" % type(e).__name__)
         assert 0
 
     if not fail:
         # Check the update has been replicated on M1
-        topology_m2.ms["master1"].log.info("M1: Bind as %s" % DN_DM)
-        topology_m2.ms["master1"].simple_bind_s(DN_DM, PASSWORD)
-        topology_m2.ms["master1"].log.info("M1: Check %s.postalCode=1929)" % (ENTRY_DN))
+        topology_m2.ms["supplier1"].log.info("M1: Bind as %s" % DN_DM)
+        topology_m2.ms["supplier1"].simple_bind_s(DN_DM, PASSWORD)
+        topology_m2.ms["supplier1"].log.info("M1: Check %s.postalCode=1929)" % (ENTRY_DN))
         loop = 0
         while loop <= 10:
             try:
-                ent = topology_m2.ms["master1"].getEntry(ENTRY_DN, ldap.SCOPE_BASE, "(objectclass=*)")
+                ent = topology_m2.ms["supplier1"].getEntry(ENTRY_DN, ldap.SCOPE_BASE, "(objectclass=*)")
                 if ent.hasAttr('postalCode') and (ensure_str(ent.getValue('postalCode')) == '1929'):
                     break
             except ldap.NO_SUCH_OBJECT:
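
Both tests pivot on a SELFDN aci: the bound user may add or write an entry only when that entry's single-valued member attribute points back at the bind DN. Assembled in one piece for readability (the allow clause is assumed here, since the hunks above elide it):

    ACI_TARGET = '(target = "ldap:///cn=*,%s")' % SUFFIX
    ACI_TARGETFILTER = '(targetfilter ="(objectClass=%s)")' % OC_NAME
    ACI_ALLOW = '(version 3.0; acl "SelfDN add"; allow (add)'   # assumed wording
    ACI_SUBJECT = ' userattr = "member#selfDN";)'
    ACI_BODY = ACI_TARGET + ACI_TARGETFILTER + ACI_ALLOW + ACI_SUBJECT

    mod = [(ldap.MOD_ADD, 'aci', ensure_bytes(ACI_BODY))]
    topology_m2.ms["supplier1"].modify_s(SUFFIX, mod)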

+ 54 - 54
dirsrvtests/tests/tickets/ticket47676_test.py

@@ -72,9 +72,9 @@ def _oc_definition(oid_ext, name, must=None, may=None):
 
 def replication_check(topology_m2):
     repl = ReplicationManager(SUFFIX)
-    master1 = topology_m2.ms["master1"]
-    master2 = topology_m2.ms["master2"]
-    return repl.test_replication(master1, master2)
+    supplier1 = topology_m2.ms["supplier1"]
+    supplier2 = topology_m2.ms["supplier2"]
+    return repl.test_replication(supplier1, supplier2)
 
 def test_ticket47676_init(topology_m2):
     """
@@ -85,13 +85,13 @@ def test_ticket47676_init(topology_m2):
 
     """
 
-    topology_m2.ms["master1"].log.info("Add %s that allows 'member' attribute" % OC_NAME)
+    topology_m2.ms["supplier1"].log.info("Add %s that allows 'member' attribute" % OC_NAME)
     new_oc = _oc_definition(OC_OID_EXT, OC_NAME, must=MUST, may=MAY)
-    topology_m2.ms["master1"].schema.add_schema('objectClasses', new_oc)
+    topology_m2.ms["supplier1"].schema.add_schema('objectClasses', new_oc)
 
     # entry used to bind with
-    topology_m2.ms["master1"].log.info("Add %s" % BIND_DN)
-    topology_m2.ms["master1"].add_s(Entry((BIND_DN, {
+    topology_m2.ms["supplier1"].log.info("Add %s" % BIND_DN)
+    topology_m2.ms["supplier1"].add_s(Entry((BIND_DN, {
         'objectclass': "top person".split(),
         'sn': BIND_NAME,
         'cn': BIND_NAME,
@@ -99,13 +99,13 @@ def test_ticket47676_init(topology_m2):
 
     # enable acl error logging
     mod = [(ldap.MOD_REPLACE, 'nsslapd-errorlog-level', ensure_bytes(str(128 + 8192)))]  # ACL + REPL
-    topology_m2.ms["master1"].modify_s(DN_CONFIG, mod)
-    topology_m2.ms["master2"].modify_s(DN_CONFIG, mod)
+    topology_m2.ms["supplier1"].modify_s(DN_CONFIG, mod)
+    topology_m2.ms["supplier2"].modify_s(DN_CONFIG, mod)
 
     # add dummy entries
     for cpt in range(MAX_OTHERS):
         name = "%s%d" % (OTHER_NAME, cpt)
-        topology_m2.ms["master1"].add_s(Entry(("cn=%s,%s" % (name, SUFFIX), {
+        topology_m2.ms["supplier1"].add_s(Entry(("cn=%s,%s" % (name, SUFFIX), {
             'objectclass': "top person".split(),
             'sn': name,
             'cn': name})))
@@ -113,15 +113,15 @@ def test_ticket47676_init(topology_m2):
 
 def test_ticket47676_skip_oc_at(topology_m2):
     '''
-        This test ADD an entry on MASTER1 where 47676 is fixed. Then it checks that entry is replicated
-        on MASTER2 (even if on MASTER2 47676 is NOT fixed). Then update on MASTER2.
-        If the schema has successfully been pushed, updating Master2 should succeed
+        This test ADDs an entry on SUPPLIER1 where 47676 is fixed. Then it checks that the entry is replicated
+        on SUPPLIER2 (even if 47676 is NOT fixed on SUPPLIER2). Then it updates on SUPPLIER2.
+        If the schema has been pushed successfully, updating Supplier2 should succeed
     '''
-    topology_m2.ms["master1"].log.info("\n\n######################### ADD ######################\n")
+    topology_m2.ms["supplier1"].log.info("\n\n######################### ADD ######################\n")
 
     # bind as 'cn=Directory manager'
-    topology_m2.ms["master1"].log.info("Bind as %s and add the add the entry with specific oc" % DN_DM)
-    topology_m2.ms["master1"].simple_bind_s(DN_DM, PASSWORD)
+    topology_m2.ms["supplier1"].log.info("Bind as %s and add the add the entry with specific oc" % DN_DM)
+    topology_m2.ms["supplier1"].simple_bind_s(DN_DM, PASSWORD)
 
     # Prepare the entry with multivalued members
     entry = Entry(ENTRY_DN)
@@ -137,50 +137,50 @@ def test_ticket47676_skip_oc_at(topology_m2):
     members.append(BIND_DN)
     entry.setValues('member', members)
 
-    topology_m2.ms["master1"].log.info("Try to add Add  %s should be successful" % ENTRY_DN)
-    topology_m2.ms["master1"].add_s(entry)
+    topology_m2.ms["supplier1"].log.info("Try to add Add  %s should be successful" % ENTRY_DN)
+    topology_m2.ms["supplier1"].add_s(entry)
 
     #
     # Now check the entry has been replicated
     #
-    topology_m2.ms["master2"].simple_bind_s(DN_DM, PASSWORD)
-    topology_m2.ms["master1"].log.info("Try to retrieve %s from Master2" % ENTRY_DN)
+    topology_m2.ms["supplier2"].simple_bind_s(DN_DM, PASSWORD)
+    topology_m2.ms["supplier1"].log.info("Try to retrieve %s from Supplier2" % ENTRY_DN)
     replication_check(topology_m2)
-    ent = topology_m2.ms["master2"].getEntry(ENTRY_DN, ldap.SCOPE_BASE, "(objectclass=*)")
+    ent = topology_m2.ms["supplier2"].getEntry(ENTRY_DN, ldap.SCOPE_BASE, "(objectclass=*)")
     assert ent
-    # Now update the entry on Master2 (as DM because 47676 is possibly not fixed on M2)
-    topology_m2.ms["master1"].log.info("Update  %s on M2" % ENTRY_DN)
+    # Now update the entry on Supplier2 (as DM because 47676 is possibly not fixed on M2)
+    topology_m2.ms["supplier1"].log.info("Update  %s on M2" % ENTRY_DN)
     mod = [(ldap.MOD_REPLACE, 'description', b'test_add')]
-    topology_m2.ms["master2"].modify_s(ENTRY_DN, mod)
+    topology_m2.ms["supplier2"].modify_s(ENTRY_DN, mod)
 
-    topology_m2.ms["master1"].simple_bind_s(DN_DM, PASSWORD)
+    topology_m2.ms["supplier1"].simple_bind_s(DN_DM, PASSWORD)
     replication_check(topology_m2)
-    ent = topology_m2.ms["master1"].getEntry(ENTRY_DN, ldap.SCOPE_BASE, "(objectclass=*)")
+    ent = topology_m2.ms["supplier1"].getEntry(ENTRY_DN, ldap.SCOPE_BASE, "(objectclass=*)")
     assert ensure_str(ent.getValue('description')) == 'test_add'
 
 
 def test_ticket47676_reject_action(topology_m2):
-    topology_m2.ms["master1"].log.info("\n\n######################### REJECT ACTION ######################\n")
+    topology_m2.ms["supplier1"].log.info("\n\n######################### REJECT ACTION ######################\n")
 
-    topology_m2.ms["master1"].simple_bind_s(DN_DM, PASSWORD)
-    topology_m2.ms["master2"].simple_bind_s(DN_DM, PASSWORD)
+    topology_m2.ms["supplier1"].simple_bind_s(DN_DM, PASSWORD)
+    topology_m2.ms["supplier2"].simple_bind_s(DN_DM, PASSWORD)
 
-    # make master1 to refuse to push the schema if OC_NAME is present in consumer schema
+    # make supplier1 refuse to push the schema if OC_NAME is present in the consumer schema
     mod = [(ldap.MOD_ADD, 'schemaUpdateObjectclassReject', ensure_bytes('%s' % (OC_NAME)))]
-    topology_m2.ms["master1"].modify_s(REPL_SCHEMA_POLICY_SUPPLIER, mod)
+    topology_m2.ms["supplier1"].modify_s(REPL_SCHEMA_POLICY_SUPPLIER, mod)
 
     # Restart is required for the policy to take effect
-    topology_m2.ms["master1"].stop(timeout=10)
-    topology_m2.ms["master1"].start(timeout=10)
+    topology_m2.ms["supplier1"].stop(timeout=10)
+    topology_m2.ms["supplier1"].start(timeout=10)
 
     # Add a new OC on M1 so that schema CSN will change and M1 will try to push the schema
-    topology_m2.ms["master1"].log.info("Add %s on M1" % OC2_NAME)
+    topology_m2.ms["supplier1"].log.info("Add %s on M1" % OC2_NAME)
     new_oc = _oc_definition(OC2_OID_EXT, OC2_NAME, must=MUST, may=MAY)
-    topology_m2.ms["master1"].schema.add_schema('objectClasses', new_oc)
+    topology_m2.ms["supplier1"].schema.add_schema('objectClasses', new_oc)
 
     # Sanity check that the schema has been updated on M1
-    topology_m2.ms["master1"].log.info("Check %s is in M1" % OC2_NAME)
-    ent = topology_m2.ms["master1"].getEntry(SCHEMA_DN, ldap.SCOPE_BASE, "(objectclass=*)", ["objectclasses"])
+    topology_m2.ms["supplier1"].log.info("Check %s is in M1" % OC2_NAME)
+    ent = topology_m2.ms["supplier1"].getEntry(SCHEMA_DN, ldap.SCOPE_BASE, "(objectclass=*)", ["objectclasses"])
     assert ent.hasAttr('objectclasses')
     found = False
     for objectclass in ent.getValues('objectclasses'):
@@ -190,20 +190,20 @@ def test_ticket47676_reject_action(topology_m2):
     assert found
 
     # Do an update of M1 so that M1 will try to push the schema
-    topology_m2.ms["master1"].log.info("Update  %s on M1" % ENTRY_DN)
+    topology_m2.ms["supplier1"].log.info("Update  %s on M1" % ENTRY_DN)
     mod = [(ldap.MOD_REPLACE, 'description', b'test_reject')]
-    topology_m2.ms["master1"].modify_s(ENTRY_DN, mod)
+    topology_m2.ms["supplier1"].modify_s(ENTRY_DN, mod)
 
     # Check the replication occurred, which means M1 also attempted to push the schema
-    topology_m2.ms["master1"].log.info("Check updated %s on M2" % ENTRY_DN)
+    topology_m2.ms["supplier1"].log.info("Check updated %s on M2" % ENTRY_DN)
 
     replication_check(topology_m2)
-    ent = topology_m2.ms["master2"].getEntry(ENTRY_DN, ldap.SCOPE_BASE, "(objectclass=*)", ['description'])
+    ent = topology_m2.ms["supplier2"].getEntry(ENTRY_DN, ldap.SCOPE_BASE, "(objectclass=*)", ['description'])
     assert ensure_str(ent.getValue('description')) == 'test_reject'
 
     # Check that the schema has not been pushed
-    topology_m2.ms["master1"].log.info("Check %s is not in M2" % OC2_NAME)
-    ent = topology_m2.ms["master2"].getEntry(SCHEMA_DN, ldap.SCOPE_BASE, "(objectclass=*)", ["objectclasses"])
+    topology_m2.ms["supplier1"].log.info("Check %s is not in M2" % OC2_NAME)
+    ent = topology_m2.ms["supplier2"].getEntry(SCHEMA_DN, ldap.SCOPE_BASE, "(objectclass=*)", ["objectclasses"])
     assert ent.hasAttr('objectclasses')
     found = False
     for objectclass in ent.getValues('objectclasses'):
@@ -212,30 +212,30 @@ def test_ticket47676_reject_action(topology_m2):
             break
     assert not found
 
-    topology_m2.ms["master1"].log.info("\n\n######################### NO MORE REJECT ACTION ######################\n")
+    topology_m2.ms["supplier1"].log.info("\n\n######################### NO MORE REJECT ACTION ######################\n")
 
-    # make master1 to do no specific action on OC_NAME
+    # make supplier1 take no specific action on OC_NAME
     mod = [(ldap.MOD_DELETE, 'schemaUpdateObjectclassReject', ensure_bytes('%s' % (OC_NAME)))]
-    topology_m2.ms["master1"].modify_s(REPL_SCHEMA_POLICY_SUPPLIER, mod)
+    topology_m2.ms["supplier1"].modify_s(REPL_SCHEMA_POLICY_SUPPLIER, mod)
 
     # Restart is required for the policy to take effect
-    topology_m2.ms["master1"].stop(timeout=10)
-    topology_m2.ms["master1"].start(timeout=10)
+    topology_m2.ms["supplier1"].stop(timeout=10)
+    topology_m2.ms["supplier1"].start(timeout=10)
 
     # Do an update of M1 so that M1 will try to push the schema
-    topology_m2.ms["master1"].log.info("Update  %s on M1" % ENTRY_DN)
+    topology_m2.ms["supplier1"].log.info("Update  %s on M1" % ENTRY_DN)
     mod = [(ldap.MOD_REPLACE, 'description', b'test_no_more_reject')]
-    topology_m2.ms["master1"].modify_s(ENTRY_DN, mod)
+    topology_m2.ms["supplier1"].modify_s(ENTRY_DN, mod)
 
     # Check the replication occurred, which means M1 also attempted to push the schema
-    topology_m2.ms["master1"].log.info("Check updated %s on M2" % ENTRY_DN)
+    topology_m2.ms["supplier1"].log.info("Check updated %s on M2" % ENTRY_DN)
 
     replication_check(topology_m2)
-    ent = topology_m2.ms["master2"].getEntry(ENTRY_DN, ldap.SCOPE_BASE, "(objectclass=*)", ['description'])
+    ent = topology_m2.ms["supplier2"].getEntry(ENTRY_DN, ldap.SCOPE_BASE, "(objectclass=*)", ['description'])
     assert ensure_str(ent.getValue('description')) == 'test_no_more_reject'
     # Check that the schema has been pushed
-    topology_m2.ms["master1"].log.info("Check %s is in M2" % OC2_NAME)
-    ent = topology_m2.ms["master2"].getEntry(SCHEMA_DN, ldap.SCOPE_BASE, "(objectclass=*)", ["objectclasses"])
+    topology_m2.ms["supplier1"].log.info("Check %s is in M2" % OC2_NAME)
+    ent = topology_m2.ms["supplier2"].getEntry(SCHEMA_DN, ldap.SCOPE_BASE, "(objectclass=*)", ["objectclasses"])
     assert ent.hasAttr('objectclasses')
     found = False
     for objectclass in ent.getValues('objectclasses'):
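
The reject test exercises the supplier-side schema push policy: listing an objectclass under schemaUpdateObjectclassReject stops the schema from being pushed while ordinary updates keep replicating; deleting the value restores the push. The toggle in isolation (REPL_SCHEMA_POLICY_SUPPLIER is assumed to be the supplier's replication schema policy entry imported by the test):

    # Refuse to push the schema while OC_NAME is present in the consumer schema
    mod = [(ldap.MOD_ADD, 'schemaUpdateObjectclassReject', ensure_bytes(OC_NAME))]
    topology_m2.ms["supplier1"].modify_s(REPL_SCHEMA_POLICY_SUPPLIER, mod)
    topology_m2.ms["supplier1"].restart(timeout=10)   # the policy is read at startup

    # Later: delete the value and restart so schema push resumes
    mod = [(ldap.MOD_DELETE, 'schemaUpdateObjectclassReject', ensure_bytes(OC_NAME))]
    topology_m2.ms["supplier1"].modify_s(REPL_SCHEMA_POLICY_SUPPLIER, mod)
    topology_m2.ms["supplier1"].restart(timeout=10)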

+ 73 - 73
dirsrvtests/tests/tickets/ticket47721_test.py

@@ -81,9 +81,9 @@ def _chg_std_oc_defintion():
 
 def replication_check(topology_m2):
     repl = ReplicationManager(SUFFIX)
-    master1 = topology_m2.ms["master1"]
-    master2 = topology_m2.ms["master2"]
-    return repl.test_replication(master1, master2)
+    supplier1 = topology_m2.ms["supplier1"]
+    supplier2 = topology_m2.ms["supplier2"]
+    return repl.test_replication(supplier1, supplier2)
 
 def test_ticket47721_init(topology_m2):
     """
@@ -95,8 +95,8 @@ def test_ticket47721_init(topology_m2):
     """
 
     # entry used to bind with
-    topology_m2.ms["master1"].log.info("Add %s" % BIND_DN)
-    topology_m2.ms["master1"].add_s(Entry((BIND_DN, {
+    topology_m2.ms["supplier1"].log.info("Add %s" % BIND_DN)
+    topology_m2.ms["supplier1"].add_s(Entry((BIND_DN, {
         'objectclass': "top person".split(),
         'sn': BIND_NAME,
         'cn': BIND_NAME,
@@ -104,13 +104,13 @@ def test_ticket47721_init(topology_m2):
 
     # enable repl error logging
     mod = [(ldap.MOD_REPLACE, 'nsslapd-errorlog-level', ensure_bytes(str(8192)))]  # REPL logging
-    topology_m2.ms["master1"].modify_s(DN_CONFIG, mod)
-    topology_m2.ms["master2"].modify_s(DN_CONFIG, mod)
+    topology_m2.ms["supplier1"].modify_s(DN_CONFIG, mod)
+    topology_m2.ms["supplier2"].modify_s(DN_CONFIG, mod)
 
     # add dummy entries
     for cpt in range(MAX_OTHERS):
         name = "%s%d" % (OTHER_NAME, cpt)
-        topology_m2.ms["master1"].add_s(Entry(("cn=%s,%s" % (name, SUFFIX), {
+        topology_m2.ms["supplier1"].add_s(Entry(("cn=%s,%s" % (name, SUFFIX), {
             'objectclass': "top person".split(),
             'sn': name,
             'cn': name})))
@@ -119,44 +119,44 @@ def test_ticket47721_init(topology_m2):
 def test_ticket47721_0(topology_m2):
     dn = "cn=%s0,%s" % (OTHER_NAME, SUFFIX)
     replication_check(topology_m2)
-    ent = topology_m2.ms["master2"].getEntry(dn, ldap.SCOPE_BASE, "(objectclass=*)")
+    ent = topology_m2.ms["supplier2"].getEntry(dn, ldap.SCOPE_BASE, "(objectclass=*)")
     assert ent
 
 
 def test_ticket47721_1(topology_m2):
     log.info('Running test 1...')
-    # topology_m2.ms["master1"].log.info("Attach debugger\n\n")
+    # topology_m2.ms["supplier1"].log.info("Attach debugger\n\n")
     # time.sleep(30)
 
     new = _add_custom_at_definition()
-    topology_m2.ms["master1"].log.info("Add (M2) %s " % new)
-    topology_m2.ms["master2"].schema.add_schema('attributetypes', new)
+    topology_m2.ms["supplier1"].log.info("Add (M2) %s " % new)
+    topology_m2.ms["supplier2"].schema.add_schema('attributetypes', new)
 
     new = _chg_std_at_defintion()
-    topology_m2.ms["master1"].log.info("Chg (M2) %s " % new)
-    topology_m2.ms["master2"].schema.add_schema('attributetypes', new)
+    topology_m2.ms["supplier1"].log.info("Chg (M2) %s " % new)
+    topology_m2.ms["supplier2"].schema.add_schema('attributetypes', new)
 
     new = _add_custom_oc_defintion()
-    topology_m2.ms["master1"].log.info("Add (M2) %s " % new)
-    topology_m2.ms["master2"].schema.add_schema('objectClasses', new)
+    topology_m2.ms["supplier1"].log.info("Add (M2) %s " % new)
+    topology_m2.ms["supplier2"].schema.add_schema('objectClasses', new)
 
     new = _chg_std_oc_defintion()
-    topology_m2.ms["master1"].log.info("Chg (M2) %s " % new)
-    topology_m2.ms["master2"].schema.add_schema('objectClasses', new)
+    topology_m2.ms["supplier1"].log.info("Chg (M2) %s " % new)
+    topology_m2.ms["supplier2"].schema.add_schema('objectClasses', new)
 
     mod = [(ldap.MOD_REPLACE, 'description', b'Hello world 1')]
     dn = "cn=%s0,%s" % (OTHER_NAME, SUFFIX)
-    topology_m2.ms["master2"].modify_s(dn, mod)
+    topology_m2.ms["supplier2"].modify_s(dn, mod)
 
     replication_check(topology_m2)
-    ent = topology_m2.ms["master1"].getEntry(dn, ldap.SCOPE_BASE, "(objectclass=*)")
+    ent = topology_m2.ms["supplier1"].getEntry(dn, ldap.SCOPE_BASE, "(objectclass=*)")
     assert ensure_str(ent.getValue('description')) == 'Hello world 1'
 
     time.sleep(2)
-    schema_csn_master1 = topology_m2.ms["master1"].schema.get_schema_csn()
-    schema_csn_master2 = topology_m2.ms["master2"].schema.get_schema_csn()
-    log.debug('Master 1 schemaCSN: %s' % schema_csn_master1)
-    log.debug('Master 2 schemaCSN: %s' % schema_csn_master2)
+    schema_csn_supplier1 = topology_m2.ms["supplier1"].schema.get_schema_csn()
+    schema_csn_supplier2 = topology_m2.ms["supplier2"].schema.get_schema_csn()
+    log.debug('Supplier 1 schemaCSN: %s' % schema_csn_supplier1)
+    log.debug('Supplier 2 schemaCSN: %s' % schema_csn_supplier2)
 
 
 def test_ticket47721_2(topology_m2):
@@ -164,27 +164,27 @@ def test_ticket47721_2(topology_m2):
 
     mod = [(ldap.MOD_REPLACE, 'description', b'Hello world 2')]
     dn = "cn=%s0,%s" % (OTHER_NAME, SUFFIX)
-    topology_m2.ms["master1"].modify_s(dn, mod)
+    topology_m2.ms["supplier1"].modify_s(dn, mod)
 
     replication_check(topology_m2)
-    ent = topology_m2.ms["master2"].getEntry(dn, ldap.SCOPE_BASE, "(objectclass=*)")
+    ent = topology_m2.ms["supplier2"].getEntry(dn, ldap.SCOPE_BASE, "(objectclass=*)")
     assert ensure_str(ent.getValue('description')) == 'Hello world 2'
 
     time.sleep(2)
-    schema_csn_master1 = topology_m2.ms["master1"].schema.get_schema_csn()
-    schema_csn_master2 = topology_m2.ms["master2"].schema.get_schema_csn()
-    log.debug('Master 1 schemaCSN: %s' % schema_csn_master1)
-    log.debug('Master 2 schemaCSN: %s' % schema_csn_master2)
-    if schema_csn_master1 != schema_csn_master2:
+    schema_csn_supplier1 = topology_m2.ms["supplier1"].schema.get_schema_csn()
+    schema_csn_supplier2 = topology_m2.ms["supplier2"].schema.get_schema_csn()
+    log.debug('Supplier 1 schemaCSN: %s' % schema_csn_supplier1)
+    log.debug('Supplier 2 schemaCSN: %s' % schema_csn_supplier2)
+    if schema_csn_supplier1 != schema_csn_supplier2:
         # We need to give the server a little more time, then check it again
         log.info('Schema CSNs are not in sync yet: m1 (%s) vs m2 (%s), wait a little...'
-                 % (schema_csn_master1, schema_csn_master2))
+                 % (schema_csn_supplier1, schema_csn_supplier2))
         time.sleep(SLEEP_INTERVAL)
-        schema_csn_master1 = topology_m2.ms["master1"].schema.get_schema_csn()
-        schema_csn_master2 = topology_m2.ms["master2"].schema.get_schema_csn()
+        schema_csn_supplier1 = topology_m2.ms["supplier1"].schema.get_schema_csn()
+        schema_csn_supplier2 = topology_m2.ms["supplier2"].schema.get_schema_csn()
 
-    assert schema_csn_master1 is not None
-    assert schema_csn_master1 == schema_csn_master2
+    assert schema_csn_supplier1 is not None
+    assert schema_csn_supplier1 == schema_csn_supplier2
 
 
 def test_ticket47721_3(topology_m2):
@@ -195,44 +195,44 @@ def test_ticket47721_3(topology_m2):
     log.info('Running test 3...')
 
     # stop RA M2->M1, so that M1 can only learn schema while acting as a supplier
-    ents = topology_m2.ms["master2"].agreement.list(suffix=SUFFIX)
+    ents = topology_m2.ms["supplier2"].agreement.list(suffix=SUFFIX)
     assert len(ents) == 1
-    topology_m2.ms["master2"].agreement.pause(ents[0].dn)
+    topology_m2.ms["supplier2"].agreement.pause(ents[0].dn)
 
     new = _add_custom_at_definition('ATtest3')
-    topology_m2.ms["master1"].log.info("Update schema (M2) %s " % new)
-    topology_m2.ms["master2"].schema.add_schema('attributetypes', new)
+    topology_m2.ms["supplier1"].log.info("Update schema (M2) %s " % new)
+    topology_m2.ms["supplier2"].schema.add_schema('attributetypes', new)
     time.sleep(1)
 
     new = _add_custom_oc_defintion('OCtest3')
-    topology_m2.ms["master1"].log.info("Update schema (M2) %s " % new)
-    topology_m2.ms["master2"].schema.add_schema('objectClasses', new)
+    topology_m2.ms["supplier1"].log.info("Update schema (M2) %s " % new)
+    topology_m2.ms["supplier2"].schema.add_schema('objectClasses', new)
     time.sleep(1)
 
     mod = [(ldap.MOD_REPLACE, 'description', b'Hello world 3')]
     dn = "cn=%s0,%s" % (OTHER_NAME, SUFFIX)
-    topology_m2.ms["master1"].modify_s(dn, mod)
+    topology_m2.ms["supplier1"].modify_s(dn, mod)
 
     replication_check(topology_m2)
-    ent = topology_m2.ms["master2"].getEntry(dn, ldap.SCOPE_BASE, "(objectclass=*)")
+    ent = topology_m2.ms["supplier2"].getEntry(dn, ldap.SCOPE_BASE, "(objectclass=*)")
     assert ensure_str(ent.getValue('description')) == 'Hello world 3'
 
     time.sleep(5)
-    schema_csn_master1 = topology_m2.ms["master1"].schema.get_schema_csn()
-    schema_csn_master2 = topology_m2.ms["master2"].schema.get_schema_csn()
-    log.debug('Master 1 schemaCSN: %s' % schema_csn_master1)
-    log.debug('Master 2 schemaCSN: %s' % schema_csn_master2)
-    if schema_csn_master1 == schema_csn_master2:
+    schema_csn_supplier1 = topology_m2.ms["supplier1"].schema.get_schema_csn()
+    schema_csn_supplier2 = topology_m2.ms["supplier2"].schema.get_schema_csn()
+    log.debug('Supplier 1 schemaCSN: %s' % schema_csn_supplier1)
+    log.debug('Supplier 2 schemaCSN: %s' % schema_csn_supplier2)
+    if schema_csn_supplier1 == schema_csn_supplier2:
         # We need to give the server a little more time, then check it again
         log.info('Schema CSNs are unexpectedly in sync: m1 (%s) vs m2 (%s), wait a little...'
-                 % (schema_csn_master1, schema_csn_master2))
+                 % (schema_csn_supplier1, schema_csn_supplier2))
         time.sleep(SLEEP_INTERVAL)
-        schema_csn_master1 = topology_m2.ms["master1"].schema.get_schema_csn()
-        schema_csn_master2 = topology_m2.ms["master2"].schema.get_schema_csn()
+        schema_csn_supplier1 = topology_m2.ms["supplier1"].schema.get_schema_csn()
+        schema_csn_supplier2 = topology_m2.ms["supplier2"].schema.get_schema_csn()
 
-    assert schema_csn_master1 is not None
+    assert schema_csn_supplier1 is not None
     # schema csn on M2 is larger than on M1, as M1 only took the new definitions
-    assert schema_csn_master1 != schema_csn_master2
+    assert schema_csn_supplier1 != schema_csn_supplier2
 
 
 def test_ticket47721_4(topology_m2):
@@ -245,45 +245,45 @@ def test_ticket47721_4(topology_m2):
     log.info('Running test 4...')
 
     new = _add_custom_at_definition('ATtest4')
-    topology_m2.ms["master1"].log.info("Update schema (M1) %s " % new)
-    topology_m2.ms["master1"].schema.add_schema('attributetypes', new)
+    topology_m2.ms["supplier1"].log.info("Update schema (M1) %s " % new)
+    topology_m2.ms["supplier1"].schema.add_schema('attributetypes', new)
 
     new = _add_custom_oc_defintion('OCtest4')
-    topology_m2.ms["master1"].log.info("Update schema (M1) %s " % new)
-    topology_m2.ms["master1"].schema.add_schema('objectClasses', new)
+    topology_m2.ms["supplier1"].log.info("Update schema (M1) %s " % new)
+    topology_m2.ms["supplier1"].schema.add_schema('objectClasses', new)
 
-    topology_m2.ms["master1"].log.info("trigger replication M1->M2: to update the schema")
+    topology_m2.ms["supplier1"].log.info("trigger replication M1->M2: to update the schema")
     mod = [(ldap.MOD_REPLACE, 'description', b'Hello world 4')]
     dn = "cn=%s0,%s" % (OTHER_NAME, SUFFIX)
-    topology_m2.ms["master1"].modify_s(dn, mod)
+    topology_m2.ms["supplier1"].modify_s(dn, mod)
 
     replication_check(topology_m2)
-    ent = topology_m2.ms["master2"].getEntry(dn, ldap.SCOPE_BASE, "(objectclass=*)")
+    ent = topology_m2.ms["supplier2"].getEntry(dn, ldap.SCOPE_BASE, "(objectclass=*)")
     assert ensure_str(ent.getValue('description')) == 'Hello world 4'
 
-    topology_m2.ms["master1"].log.info("trigger replication M1->M2: to push the schema")
+    topology_m2.ms["supplier1"].log.info("trigger replication M1->M2: to push the schema")
     mod = [(ldap.MOD_REPLACE, 'description', b'Hello world 5')]
     dn = "cn=%s0,%s" % (OTHER_NAME, SUFFIX)
-    topology_m2.ms["master1"].modify_s(dn, mod)
+    topology_m2.ms["supplier1"].modify_s(dn, mod)
 
     replication_check(topology_m2)
-    ent = topology_m2.ms["master2"].getEntry(dn, ldap.SCOPE_BASE, "(objectclass=*)")
+    ent = topology_m2.ms["supplier2"].getEntry(dn, ldap.SCOPE_BASE, "(objectclass=*)")
     assert ensure_str(ent.getValue('description')) == 'Hello world 5'
 
     time.sleep(2)
-    schema_csn_master1 = topology_m2.ms["master1"].schema.get_schema_csn()
-    schema_csn_master2 = topology_m2.ms["master2"].schema.get_schema_csn()
-    log.debug('Master 1 schemaCSN: %s' % schema_csn_master1)
-    log.debug('Master 2 schemaCSN: %s' % schema_csn_master2)
-    if schema_csn_master1 != schema_csn_master2:
+    schema_csn_supplier1 = topology_m2.ms["supplier1"].schema.get_schema_csn()
+    schema_csn_supplier2 = topology_m2.ms["supplier2"].schema.get_schema_csn()
+    log.debug('Supplier 1 schemaCSN: %s' % schema_csn_supplier1)
+    log.debug('Supplier 2 schemaCSN: %s' % schema_csn_supplier2)
+    if schema_csn_supplier1 != schema_csn_supplier2:
         # We need to give the server a little more time, then check it again
         log.info('Schema CSNs are not in sync yet, wait a little...')
         time.sleep(SLEEP_INTERVAL)
-        schema_csn_master1 = topology_m2.ms["master1"].schema.get_schema_csn()
-        schema_csn_master2 = topology_m2.ms["master2"].schema.get_schema_csn()
+        schema_csn_supplier1 = topology_m2.ms["supplier1"].schema.get_schema_csn()
+        schema_csn_supplier2 = topology_m2.ms["supplier2"].schema.get_schema_csn()
 
-    assert schema_csn_master1 is not None
-    assert schema_csn_master1 == schema_csn_master2
+    assert schema_csn_supplier1 is not None
+    assert schema_csn_supplier1 == schema_csn_supplier2
 
 
 if __name__ == '__main__':

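Tests 2 and 4 above share the same fixed-sleep-and-recheck pattern around schema.get_schema_csn(). A bounded polling loop states the intent more directly; a minimal sketch under the same topology_m2 fixture (the helper itself is hypothetical, not code from this commit):

import time

def wait_schema_csn_converged(topology_m2, timeout=30, interval=2):
    # Hypothetical helper: poll both suppliers until their schemaCSN
    # values are non-None and equal, or give up after `timeout` seconds.
    s1 = topology_m2.ms["supplier1"]
    s2 = topology_m2.ms["supplier2"]
    deadline = time.time() + timeout
    csn1 = csn2 = None
    while time.time() < deadline:
        csn1 = s1.schema.get_schema_csn()
        csn2 = s2.schema.get_schema_csn()
        if csn1 is not None and csn1 == csn2:
            return csn1
        time.sleep(interval)
    raise AssertionError("schemaCSN did not converge: %s vs %s" % (csn1, csn2))

Test 3 would use the inverse check (assert the CSNs still differ after the grace period), since there M1 is expected to learn the new definitions without adopting M2's schemaCSN.
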
+ 14 - 14
dirsrvtests/tests/tickets/ticket47781_test.py

@@ -14,7 +14,7 @@ from lib389.topologies import topology_st
 from lib389.replica import ReplicationManager
 
 from lib389._constants import (defaultProperties, DEFAULT_SUFFIX, ReplicaRole,
-                               REPLICAID_MASTER_1, REPLICATION_BIND_DN, REPLICATION_BIND_PW,
+                               REPLICAID_SUPPLIER_1, REPLICATION_BIND_DN, REPLICATION_BIND_PW,
                                REPLICATION_BIND_METHOD, REPLICATION_TRANSPORT, RA_NAME,
                                RA_BINDDN, RA_BINDPW, RA_METHOD, RA_TRANSPORT_PROT)
 
@@ -31,9 +31,9 @@ def test_ticket47781(topology_st):
 
     log.info('Testing Ticket 47781 - Testing for deadlock after importing LDIF with replication data')
 
-    master = topology_st.standalone
+    supplier = topology_st.standalone
     repl = ReplicationManager(DEFAULT_SUFFIX)
-    repl.create_first_master(master)
+    repl.create_first_supplier(supplier)
 
     properties = {RA_NAME: r'meTo_$host:$port',
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
@@ -41,8 +41,8 @@ def test_ticket47781(topology_st):
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
     # The agreement should point to a server that does NOT exist (invalid port)
-    repl_agreement = master.agreement.create(suffix=DEFAULT_SUFFIX,
-                                             host=master.host,
+    repl_agreement = supplier.agreement.create(suffix=DEFAULT_SUFFIX,
+                                             host=supplier.host,
                                              port=5555,
                                              properties=properties)
 
@@ -51,12 +51,12 @@ def test_ticket47781(topology_st):
     #
     log.info('Adding two entries...')
 
-    master.add_s(Entry(('cn=entry1,dc=example,dc=com', {
+    supplier.add_s(Entry(('cn=entry1,dc=example,dc=com', {
         'objectclass': 'top person'.split(),
         'sn': 'user',
         'cn': 'entry1'})))
 
-    master.add_s(Entry(('cn=entry2,dc=example,dc=com', {
+    supplier.add_s(Entry(('cn=entry2,dc=example,dc=com', {
         'objectclass': 'top person'.split(),
         'sn': 'user',
         'cn': 'entry2'})))
@@ -66,21 +66,21 @@ def test_ticket47781(topology_st):
     #
     log.info('Exporting replication ldif...')
     args = {EXPORT_REPL_INFO: True}
-    exportTask = Tasks(master)
+    exportTask = Tasks(supplier)
     exportTask.exportLDIF(DEFAULT_SUFFIX, None, "/tmp/export.ldif", args)
 
     #
     # Restart the server
     #
     log.info('Restarting server...')
-    master.stop()
-    master.start()
+    supplier.stop()
+    supplier.start()
 
     #
     # Import the ldif
     #
     log.info('Import replication LDIF file...')
-    importTask = Tasks(master)
+    importTask = Tasks(supplier)
     args = {TASK_WAIT: True}
     importTask.importLDIF(DEFAULT_SUFFIX, None, "/tmp/export.ldif", args)
     os.remove("/tmp/export.ldif")
@@ -89,9 +89,9 @@ def test_ticket47781(topology_st):
     # Search for tombstones - we should not hang/timeout
     #
     log.info('Search for tombstone entries(should find one and not hang)...')
-    master.set_option(ldap.OPT_NETWORK_TIMEOUT, 5)
-    master.set_option(ldap.OPT_TIMEOUT, 5)
-    entries = master.search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, 'objectclass=nsTombstone')
+    supplier.set_option(ldap.OPT_NETWORK_TIMEOUT, 5)
+    supplier.set_option(ldap.OPT_TIMEOUT, 5)
+    entries = supplier.search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, 'objectclass=nsTombstone')
     if not entries:
         log.fatal('Search failed to find any entries.')
         assert False

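The ticket47781 test boils down to one round trip: export the suffix with replication metadata, restart, re-import, then search for tombstones under short timeouts so a regression fails fast instead of hanging. A condensed sketch of that flow, assuming a supplier prepared with create_first_supplier as above (EXPORT_REPL_INFO and TASK_WAIT are the task-option constants the test module already imports):

import ldap
from lib389.tasks import Tasks
from lib389._constants import DEFAULT_SUFFIX

def export_restart_import(supplier, ldif="/tmp/export.ldif"):
    # Export the suffix including replication metadata (RUV/CSNs).
    Tasks(supplier).exportLDIF(DEFAULT_SUFFIX, None, ldif, {EXPORT_REPL_INFO: True})
    supplier.restart()
    # Re-import the same LDIF and wait for the task to complete.
    Tasks(supplier).importLDIF(DEFAULT_SUFFIX, None, ldif, {TASK_WAIT: True})
    # Short timeouts: a deadlock surfaces as ldap.TIMEOUT, not a hung test.
    supplier.set_option(ldap.OPT_NETWORK_TIMEOUT, 5)
    supplier.set_option(ldap.OPT_TIMEOUT, 5)
    return supplier.search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, 'objectclass=nsTombstone')
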
+ 62 - 62
dirsrvtests/tests/tickets/ticket47787_test.py

@@ -66,22 +66,22 @@ def _bind_normal(server):
 
 
 def _header(topology_m2, label):
-    topology_m2.ms["master1"].log.info("\n\n###############################################")
-    topology_m2.ms["master1"].log.info("#######")
-    topology_m2.ms["master1"].log.info("####### %s" % label)
-    topology_m2.ms["master1"].log.info("#######")
-    topology_m2.ms["master1"].log.info("###############################################")
+    topology_m2.ms["supplier1"].log.info("\n\n###############################################")
+    topology_m2.ms["supplier1"].log.info("#######")
+    topology_m2.ms["supplier1"].log.info("####### %s" % label)
+    topology_m2.ms["supplier1"].log.info("#######")
+    topology_m2.ms["supplier1"].log.info("###############################################")
 
 
 def _status_entry_both_server(topology_m2, name=None, desc=None, debug=True):
     if not name:
         return
-    topology_m2.ms["master1"].log.info("\n\n######################### Tombstone on M1 ######################\n")
+    topology_m2.ms["supplier1"].log.info("\n\n######################### Tombstone on M1 ######################\n")
     attr = 'description'
     found = False
     attempt = 0
     while not found and attempt < 10:
-        ent_m1 = _find_tombstone(topology_m2.ms["master1"], SUFFIX, 'sn', name)
+        ent_m1 = _find_tombstone(topology_m2.ms["supplier1"], SUFFIX, 'sn', name)
         if attr in ent_m1.getAttrs():
             found = True
         else:
@@ -89,40 +89,40 @@ def _status_entry_both_server(topology_m2, name=None, desc=None, debug=True):
             attempt = attempt + 1
     assert ent_m1
 
-    topology_m2.ms["master1"].log.info("\n\n######################### Tombstone on M2 ######################\n")
-    ent_m2 = _find_tombstone(topology_m2.ms["master2"], SUFFIX, 'sn', name)
+    topology_m2.ms["supplier1"].log.info("\n\n######################### Tombstone on M2 ######################\n")
+    ent_m2 = _find_tombstone(topology_m2.ms["supplier2"], SUFFIX, 'sn', name)
     assert ent_m2
 
-    topology_m2.ms["master1"].log.info("\n\n######################### Description ######################\n%s\n" % desc)
-    topology_m2.ms["master1"].log.info("M1 only\n")
+    topology_m2.ms["supplier1"].log.info("\n\n######################### Description ######################\n%s\n" % desc)
+    topology_m2.ms["supplier1"].log.info("M1 only\n")
     for attr in ent_m1.getAttrs():
 
         if not debug:
             assert attr in ent_m2.getAttrs()
 
         if not attr in ent_m2.getAttrs():
-            topology_m2.ms["master1"].log.info("    %s" % attr)
+            topology_m2.ms["supplier1"].log.info("    %s" % attr)
             for val in ent_m1.getValues(attr):
-                topology_m2.ms["master1"].log.info("        %s" % val)
+                topology_m2.ms["supplier1"].log.info("        %s" % val)
 
-    topology_m2.ms["master1"].log.info("M2 only\n")
+    topology_m2.ms["supplier1"].log.info("M2 only\n")
     for attr in ent_m2.getAttrs():
 
         if not debug:
             assert attr in ent_m1.getAttrs()
 
         if not attr in ent_m1.getAttrs():
-            topology_m2.ms["master1"].log.info("    %s" % attr)
+            topology_m2.ms["supplier1"].log.info("    %s" % attr)
             for val in ent_m2.getValues(attr):
-                topology_m2.ms["master1"].log.info("        %s" % val)
+                topology_m2.ms["supplier1"].log.info("        %s" % val)
 
-    topology_m2.ms["master1"].log.info("M1 differs M2\n")
+    topology_m2.ms["supplier1"].log.info("M1 differs M2\n")
 
     if not debug:
         assert ent_m1.dn == ent_m2.dn
 
     if ent_m1.dn != ent_m2.dn:
-        topology_m2.ms["master1"].log.info("    M1[dn] = %s\n    M2[dn] = %s" % (ent_m1.dn, ent_m2.dn))
+        topology_m2.ms["supplier1"].log.info("    M1[dn] = %s\n    M2[dn] = %s" % (ent_m1.dn, ent_m2.dn))
 
     for attr1 in ent_m1.getAttrs():
         if attr1 in ent_m2.getAttrs():
@@ -137,7 +137,7 @@ def _status_entry_both_server(topology_m2, name=None, desc=None, debug=True):
                     assert found
 
                 if not found:
-                    topology_m2.ms["master1"].log.info("    M1[%s] = %s" % (attr1, val1))
+                    topology_m2.ms["supplier1"].log.info("    M1[%s] = %s" % (attr1, val1))
 
     for attr2 in ent_m2.getAttrs():
         if attr2 in ent_m1.getAttrs():
@@ -152,29 +152,29 @@ def _status_entry_both_server(topology_m2, name=None, desc=None, debug=True):
                     assert found
 
                 if not found:
-                    topology_m2.ms["master1"].log.info("    M2[%s] = %s" % (attr2, val2))
+                    topology_m2.ms["supplier1"].log.info("    M2[%s] = %s" % (attr2, val2))
 
 
 def _pause_RAs(topology_m2):
-    topology_m2.ms["master1"].log.info("\n\n######################### Pause RA M1<->M2 ######################\n")
-    ents = topology_m2.ms["master1"].agreement.list(suffix=SUFFIX)
+    topology_m2.ms["supplier1"].log.info("\n\n######################### Pause RA M1<->M2 ######################\n")
+    ents = topology_m2.ms["supplier1"].agreement.list(suffix=SUFFIX)
     assert len(ents) == 1
-    topology_m2.ms["master1"].agreement.pause(ents[0].dn)
+    topology_m2.ms["supplier1"].agreement.pause(ents[0].dn)
 
-    ents = topology_m2.ms["master2"].agreement.list(suffix=SUFFIX)
+    ents = topology_m2.ms["supplier2"].agreement.list(suffix=SUFFIX)
     assert len(ents) == 1
-    topology_m2.ms["master2"].agreement.pause(ents[0].dn)
+    topology_m2.ms["supplier2"].agreement.pause(ents[0].dn)
 
 
 def _resume_RAs(topology_m2):
-    topology_m2.ms["master1"].log.info("\n\n######################### resume RA M1<->M2 ######################\n")
-    ents = topology_m2.ms["master1"].agreement.list(suffix=SUFFIX)
+    topology_m2.ms["supplier1"].log.info("\n\n######################### resume RA M1<->M2 ######################\n")
+    ents = topology_m2.ms["supplier1"].agreement.list(suffix=SUFFIX)
     assert len(ents) == 1
-    topology_m2.ms["master1"].agreement.resume(ents[0].dn)
+    topology_m2.ms["supplier1"].agreement.resume(ents[0].dn)
 
-    ents = topology_m2.ms["master2"].agreement.list(suffix=SUFFIX)
+    ents = topology_m2.ms["supplier2"].agreement.list(suffix=SUFFIX)
     assert len(ents) == 1
-    topology_m2.ms["master2"].agreement.resume(ents[0].dn)
+    topology_m2.ms["supplier2"].agreement.resume(ents[0].dn)
 
 
 def _find_tombstone(instance, base, attr, value):
@@ -261,24 +261,24 @@ def _check_replication(topology_m2, entry_dn):
     # prepare the filter to retrieve the entry
     filt = entry_dn.split(',')[0]
 
-    topology_m2.ms["master1"].log.info("\n######################### Check replicat M1->M2 ######################\n")
+    topology_m2.ms["supplier1"].log.info("\n######################### Check replicat M1->M2 ######################\n")
     loop = 0
     while loop <= 10:
         attr = 'description'
         value = 'test_value_%d' % loop
         mod = [(ldap.MOD_REPLACE, attr, ensure_bytes(value))]
-        topology_m2.ms["master1"].modify_s(entry_dn, mod)
-        _check_mod_received(topology_m2.ms["master2"], SUFFIX, filt, attr, value)
+        topology_m2.ms["supplier1"].modify_s(entry_dn, mod)
+        _check_mod_received(topology_m2.ms["supplier2"], SUFFIX, filt, attr, value)
         loop += 1
 
-    topology_m2.ms["master1"].log.info("\n######################### Check replicat M2->M1 ######################\n")
+    topology_m2.ms["supplier1"].log.info("\n######################### Check replicat M2->M1 ######################\n")
     loop = 0
     while loop <= 10:
         attr = 'description'
         value = 'test_value_%d' % loop
         mod = [(ldap.MOD_REPLACE, attr, ensure_bytes(value))]
-        topology_m2.ms["master2"].modify_s(entry_dn, mod)
-        _check_mod_received(topology_m2.ms["master1"], SUFFIX, filt, attr, value)
+        topology_m2.ms["supplier2"].modify_s(entry_dn, mod)
+        _check_mod_received(topology_m2.ms["supplier1"], SUFFIX, filt, attr, value)
         loop += 1
 
 
@@ -291,39 +291,39 @@ def test_ticket47787_init(topology_m2):
 
     """
 
-    topology_m2.ms["master1"].log.info("\n\n######################### INITIALIZATION ######################\n")
+    topology_m2.ms["supplier1"].log.info("\n\n######################### INITIALIZATION ######################\n")
 
     # entry used to bind with
-    topology_m2.ms["master1"].log.info("Add %s" % BIND_DN)
-    topology_m2.ms["master1"].add_s(Entry((BIND_DN, {
+    topology_m2.ms["supplier1"].log.info("Add %s" % BIND_DN)
+    topology_m2.ms["supplier1"].add_s(Entry((BIND_DN, {
         'objectclass': "top person".split(),
         'sn': BIND_CN,
         'cn': BIND_CN,
         'userpassword': BIND_PW})))
 
     # DIT for staging
-    topology_m2.ms["master1"].log.info("Add %s" % STAGING_DN)
-    topology_m2.ms["master1"].add_s(Entry((STAGING_DN, {
+    topology_m2.ms["supplier1"].log.info("Add %s" % STAGING_DN)
+    topology_m2.ms["supplier1"].add_s(Entry((STAGING_DN, {
         'objectclass': "top organizationalRole".split(),
         'cn': STAGING_CN,
         'description': "staging DIT"})))
 
     # DIT for production
-    topology_m2.ms["master1"].log.info("Add %s" % PRODUCTION_DN)
-    topology_m2.ms["master1"].add_s(Entry((PRODUCTION_DN, {
+    topology_m2.ms["supplier1"].log.info("Add %s" % PRODUCTION_DN)
+    topology_m2.ms["supplier1"].add_s(Entry((PRODUCTION_DN, {
         'objectclass': "top organizationalRole".split(),
         'cn': PRODUCTION_CN,
         'description': "production DIT"})))
 
     # enable replication error logging
     mod = [(ldap.MOD_REPLACE, 'nsslapd-errorlog-level', b'8192')]
-    topology_m2.ms["master1"].modify_s(DN_CONFIG, mod)
-    topology_m2.ms["master2"].modify_s(DN_CONFIG, mod)
+    topology_m2.ms["supplier1"].modify_s(DN_CONFIG, mod)
+    topology_m2.ms["supplier2"].modify_s(DN_CONFIG, mod)
 
     # add dummy entries in the staging DIT
     for cpt in range(MAX_ACCOUNTS):
         name = "%s%d" % (NEW_ACCOUNT, cpt)
-        topology_m2.ms["master1"].add_s(Entry(("cn=%s,%s" % (name, STAGING_DN), {
+        topology_m2.ms["supplier1"].add_s(Entry(("cn=%s,%s" % (name, STAGING_DN), {
             'objectclass': "top person".split(),
             'sn': name,
             'cn': name})))
@@ -340,8 +340,8 @@ def test_ticket47787_2(topology_m2):
     '''
 
     _header(topology_m2, "test_ticket47787_2")
-    _bind_manager(topology_m2.ms["master1"])
-    _bind_manager(topology_m2.ms["master2"])
+    _bind_manager(topology_m2.ms["supplier1"])
+    _bind_manager(topology_m2.ms["supplier2"])
 
     # entry to test the replication is still working
     name = "%s%d" % (NEW_ACCOUNT, MAX_ACCOUNTS - 1)
@@ -362,34 +362,34 @@ def test_ticket47787_2(topology_m2):
     entry_dn = "%s,%s" % (rdn, STAGING_DN)
 
     # created on M1, wait the entry exists on M2
-    _check_entry_exists(topology_m2.ms["master2"], entry_dn)
-    _check_entry_exists(topology_m2.ms["master2"], testentry_dn)
+    _check_entry_exists(topology_m2.ms["supplier2"], entry_dn)
+    _check_entry_exists(topology_m2.ms["supplier2"], testentry_dn)
 
     _pause_RAs(topology_m2)
 
     # Delete 'entry_dn' on M1.
     # the dummy update is only there to have a first CSN before the DEL,
     # else the DEL would end up in the min_csn RUV and make diagnosis a bit more complex
-    _mod_entry(topology_m2.ms["master1"], testentry2_dn, attr, 'dummy')
-    _delete_entry(topology_m2.ms["master1"], entry_dn, name)
-    _mod_entry(topology_m2.ms["master1"], testentry2_dn, attr, value)
+    _mod_entry(topology_m2.ms["supplier1"], testentry2_dn, attr, 'dummy')
+    _delete_entry(topology_m2.ms["supplier1"], entry_dn, name)
+    _mod_entry(topology_m2.ms["supplier1"], testentry2_dn, attr, value)
 
     time.sleep(1)  # important to have MOD.csn != DEL.csn
 
     # MOD 'entry_dn' on M2.
     # the dummy update is only there to have a first CSN before the MOD of entry_dn,
     # else the MOD would end up in the min_csn RUV and make diagnosis a bit more complex
-    _mod_entry(topology_m2.ms["master2"], testentry_dn, attr, 'dummy')
-    _mod_entry(topology_m2.ms["master2"], entry_dn, attr, value)
-    _mod_entry(topology_m2.ms["master2"], testentry_dn, attr, value)
+    _mod_entry(topology_m2.ms["supplier2"], testentry_dn, attr, 'dummy')
+    _mod_entry(topology_m2.ms["supplier2"], entry_dn, attr, value)
+    _mod_entry(topology_m2.ms["supplier2"], testentry_dn, attr, value)
 
     _resume_RAs(topology_m2)
 
-    topology_m2.ms["master1"].log.info(
+    topology_m2.ms["supplier1"].log.info(
         "\n\n######################### Check DEL replicated on M2 ######################\n")
     loop = 0
     while loop <= 10:
-        ent = _find_tombstone(topology_m2.ms["master2"], SUFFIX, 'sn', name)
+        ent = _find_tombstone(topology_m2.ms["supplier2"], SUFFIX, 'sn', name)
         if ent:
             break
         time.sleep(1)
@@ -399,18 +399,18 @@ def test_ticket47787_2(topology_m2):
 
     # the following checks are not necessary
     # as this bug is only for failing replicated MOD (entry_dn) on M1
-    # _check_mod_received(topology_m2.ms["master1"], SUFFIX, "(%s)" % (test_rdn), attr, value)
-    # _check_mod_received(topology_m2.ms["master2"], SUFFIX, "(%s)" % (test2_rdn), attr, value)
+    # _check_mod_received(topology_m2.ms["supplier1"], SUFFIX, "(%s)" % (test_rdn), attr, value)
+    # _check_mod_received(topology_m2.ms["supplier2"], SUFFIX, "(%s)" % (test2_rdn), attr, value)
     #
     # _check_replication(topology_m2, testentry_dn)
 
     _status_entry_both_server(topology_m2, name=name, desc="DEL M1 - MOD M2", debug=DEBUG_FLAG)
 
-    topology_m2.ms["master1"].log.info(
+    topology_m2.ms["supplier1"].log.info(
         "\n\n######################### Check MOD replicated on M1 ######################\n")
     loop = 0
     while loop <= 10:
-        ent = _find_tombstone(topology_m2.ms["master1"], SUFFIX, 'sn', name)
+        ent = _find_tombstone(topology_m2.ms["supplier1"], SUFFIX, 'sn', name)
         if ent:
             break
         time.sleep(1)

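_pause_RAs and _resume_RAs above bracket a window in which the two suppliers diverge on purpose (a DEL on one side, a MOD of the same entry on the other). That bracketing generalizes to a small wrapper; a hypothetical sketch reusing the module's SUFFIX constant:

def paused_replication(topology_m2, action):
    # Hypothetical helper: pause the single agreement on each supplier,
    # run the divergent updates, then resume so the conflict replicates.
    suppliers = (topology_m2.ms["supplier1"], topology_m2.ms["supplier2"])
    agmts = []
    for srv in suppliers:
        ents = srv.agreement.list(suffix=SUFFIX)
        assert len(ents) == 1
        agmts.append((srv, ents[0].dn))
        srv.agreement.pause(ents[0].dn)
    try:
        action(*suppliers)  # e.g. DEL on supplier1, MOD of the same entry on supplier2
    finally:
        for srv, dn in agmts:
            srv.agreement.resume(dn)
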
+ 63 - 63
dirsrvtests/tests/tickets/ticket47869MMR_test.py

@@ -32,9 +32,9 @@ BIND_PW = 'password'
 
 def replication_check(topology_m2):
     repl = ReplicationManager(SUFFIX)
-    master1 = topology_m2.ms["master1"]
-    master2 = topology_m2.ms["master2"]
-    return repl.test_replication(master1, master2)
+    supplier1 = topology_m2.ms["supplier1"]
+    supplier2 = topology_m2.ms["supplier2"]
+    return repl.test_replication(supplier1, supplier2)
 
 def test_ticket47869_init(topology_m2):
     """
@@ -44,153 +44,153 @@ def test_ticket47869_init(topology_m2):
     """
     # enable replication error logging
     mod = [(ldap.MOD_REPLACE, 'nsslapd-errorlog-level', ensure_bytes(str(8192)))]  # REPL
-    topology_m2.ms["master1"].modify_s(DN_CONFIG, mod)
-    topology_m2.ms["master2"].modify_s(DN_CONFIG, mod)
+    topology_m2.ms["supplier1"].modify_s(DN_CONFIG, mod)
+    topology_m2.ms["supplier2"].modify_s(DN_CONFIG, mod)
 
     # entry used to bind with
-    topology_m2.ms["master1"].log.info("Add %s" % BIND_DN)
-    topology_m2.ms["master1"].add_s(Entry((BIND_DN, {
+    topology_m2.ms["supplier1"].log.info("Add %s" % BIND_DN)
+    topology_m2.ms["supplier1"].add_s(Entry((BIND_DN, {
         'objectclass': "top person".split(),
         'sn': BIND_NAME,
         'cn': BIND_NAME,
         'userpassword': BIND_PW})))
     replication_check(topology_m2)
-    ent = topology_m2.ms["master2"].getEntry(BIND_DN, ldap.SCOPE_BASE, "(objectclass=*)")
+    ent = topology_m2.ms["supplier2"].getEntry(BIND_DN, ldap.SCOPE_BASE, "(objectclass=*)")
     assert ent
     # keep the anonymous ACI so the 'read-search' aci can be used in the SEARCH test
     ACI_ANONYMOUS = "(targetattr!=\"userPassword\")(version 3.0; acl \"Enable anonymous access\"; allow (read, search, compare) userdn=\"ldap:///anyone\";)"
     mod = [(ldap.MOD_REPLACE, 'aci', ensure_bytes(ACI_ANONYMOUS))]
-    topology_m2.ms["master1"].modify_s(SUFFIX, mod)
-    topology_m2.ms["master2"].modify_s(SUFFIX, mod)
+    topology_m2.ms["supplier1"].modify_s(SUFFIX, mod)
+    topology_m2.ms["supplier2"].modify_s(SUFFIX, mod)
 
     # add entries
     for cpt in range(MAX_ENTRIES):
         name = "%s%d" % (ENTRY_NAME, cpt)
         mydn = "cn=%s,%s" % (name, SUFFIX)
-        topology_m2.ms["master1"].add_s(Entry((mydn,
+        topology_m2.ms["supplier1"].add_s(Entry((mydn,
                                                {'objectclass': "top person".split(),
                                                 'sn': name,
                                                 'cn': name})))
         replication_check(topology_m2)
-        ent = topology_m2.ms["master2"].getEntry(mydn, ldap.SCOPE_BASE, "(objectclass=*)")
+        ent = topology_m2.ms["supplier2"].getEntry(mydn, ldap.SCOPE_BASE, "(objectclass=*)")
         assert ent
 
 def test_ticket47869_check(topology_m2):
     '''
-    On Master 1 and 2:
+    On Supplier 1 and 2:
       Bind as Directory Manager.
       Search all specifying nscpEntryWsi in the attribute list.
       Check nscpEntryWsi is returned.
-    On Master 1 and 2:
+    On Supplier 1 and 2:
       Bind as Bind Entry.
       Search all specifying nscpEntryWsi in the attribute list.
       Check nscpEntryWsi is not returned.
-    On Master 1 and 2:
+    On Supplier 1 and 2:
       Bind as anonymous.
       Search all specifying nscpEntryWsi in the attribute list.
       Check nscpEntryWsi is not returned.
     '''
-    topology_m2.ms["master1"].log.info("\n\n######################### CHECK nscpentrywsi ######################\n")
+    topology_m2.ms["supplier1"].log.info("\n\n######################### CHECK nscpentrywsi ######################\n")
 
-    topology_m2.ms["master1"].log.info("##### Master1: Bind as %s #####" % DN_DM)
-    topology_m2.ms["master1"].simple_bind_s(DN_DM, PASSWORD)
+    topology_m2.ms["supplier1"].log.info("##### Supplier1: Bind as %s #####" % DN_DM)
+    topology_m2.ms["supplier1"].simple_bind_s(DN_DM, PASSWORD)
 
-    topology_m2.ms["master1"].log.info("Master1: Calling search_ext...")
-    msgid = topology_m2.ms["master1"].search_ext(SUFFIX, ldap.SCOPE_SUBTREE, 'objectclass=*', ['nscpentrywsi'])
+    topology_m2.ms["supplier1"].log.info("Supplier1: Calling search_ext...")
+    msgid = topology_m2.ms["supplier1"].search_ext(SUFFIX, ldap.SCOPE_SUBTREE, 'objectclass=*', ['nscpentrywsi'])
     nscpentrywsicnt = 0
-    rtype, rdata, rmsgid = topology_m2.ms["master1"].result2(msgid)
-    topology_m2.ms["master1"].log.info("%d results" % len(rdata))
+    rtype, rdata, rmsgid = topology_m2.ms["supplier1"].result2(msgid)
+    topology_m2.ms["supplier1"].log.info("%d results" % len(rdata))
 
-    topology_m2.ms["master1"].log.info("Results:")
+    topology_m2.ms["supplier1"].log.info("Results:")
     for dn, attrs in rdata:
-        topology_m2.ms["master1"].log.info("dn: %s" % dn)
+        topology_m2.ms["supplier1"].log.info("dn: %s" % dn)
         if 'nscpentrywsi' in attrs:
             nscpentrywsicnt += 1
 
-    topology_m2.ms["master1"].log.info("Master1: count of nscpentrywsi: %d" % nscpentrywsicnt)
+    topology_m2.ms["supplier1"].log.info("Supplier1: count of nscpentrywsi: %d" % nscpentrywsicnt)
 
-    topology_m2.ms["master2"].log.info("##### Master2: Bind as %s #####" % DN_DM)
-    topology_m2.ms["master2"].simple_bind_s(DN_DM, PASSWORD)
+    topology_m2.ms["supplier2"].log.info("##### Supplier2: Bind as %s #####" % DN_DM)
+    topology_m2.ms["supplier2"].simple_bind_s(DN_DM, PASSWORD)
 
-    topology_m2.ms["master2"].log.info("Master2: Calling search_ext...")
-    msgid = topology_m2.ms["master2"].search_ext(SUFFIX, ldap.SCOPE_SUBTREE, 'objectclass=*', ['nscpentrywsi'])
+    topology_m2.ms["supplier2"].log.info("Supplier2: Calling search_ext...")
+    msgid = topology_m2.ms["supplier2"].search_ext(SUFFIX, ldap.SCOPE_SUBTREE, 'objectclass=*', ['nscpentrywsi'])
     nscpentrywsicnt = 0
-    rtype, rdata, rmsgid = topology_m2.ms["master2"].result2(msgid)
-    topology_m2.ms["master2"].log.info("%d results" % len(rdata))
+    rtype, rdata, rmsgid = topology_m2.ms["supplier2"].result2(msgid)
+    topology_m2.ms["supplier2"].log.info("%d results" % len(rdata))
 
-    topology_m2.ms["master2"].log.info("Results:")
+    topology_m2.ms["supplier2"].log.info("Results:")
     for dn, attrs in rdata:
-        topology_m2.ms["master2"].log.info("dn: %s" % dn)
+        topology_m2.ms["supplier2"].log.info("dn: %s" % dn)
         if 'nscpentrywsi' in attrs:
             nscpentrywsicnt += 1
 
-    topology_m2.ms["master2"].log.info("Master2: count of nscpentrywsi: %d" % nscpentrywsicnt)
+    topology_m2.ms["supplier2"].log.info("Supplier2: count of nscpentrywsi: %d" % nscpentrywsicnt)
 
     # bind as bind_entry
-    topology_m2.ms["master1"].log.info("##### Master1: Bind as %s #####" % BIND_DN)
-    topology_m2.ms["master1"].simple_bind_s(BIND_DN, BIND_PW)
+    topology_m2.ms["supplier1"].log.info("##### Supplier1: Bind as %s #####" % BIND_DN)
+    topology_m2.ms["supplier1"].simple_bind_s(BIND_DN, BIND_PW)
 
-    topology_m2.ms["master1"].log.info("Master1: Calling search_ext...")
-    msgid = topology_m2.ms["master1"].search_ext(SUFFIX, ldap.SCOPE_SUBTREE, 'objectclass=*', ['nscpentrywsi'])
+    topology_m2.ms["supplier1"].log.info("Supplier1: Calling search_ext...")
+    msgid = topology_m2.ms["supplier1"].search_ext(SUFFIX, ldap.SCOPE_SUBTREE, 'objectclass=*', ['nscpentrywsi'])
     nscpentrywsicnt = 0
-    rtype, rdata, rmsgid = topology_m2.ms["master1"].result2(msgid)
-    topology_m2.ms["master1"].log.info("%d results" % len(rdata))
+    rtype, rdata, rmsgid = topology_m2.ms["supplier1"].result2(msgid)
+    topology_m2.ms["supplier1"].log.info("%d results" % len(rdata))
 
     for dn, attrs in rdata:
         if 'nscpentrywsi' in attrs:
             nscpentrywsicnt += 1
     assert nscpentrywsicnt == 0
-    topology_m2.ms["master1"].log.info("Master1: count of nscpentrywsi: %d" % nscpentrywsicnt)
+    topology_m2.ms["supplier1"].log.info("Supplier1: count of nscpentrywsi: %d" % nscpentrywsicnt)
 
     # bind as bind_entry
-    topology_m2.ms["master2"].log.info("##### Master2: Bind as %s #####" % BIND_DN)
-    topology_m2.ms["master2"].simple_bind_s(BIND_DN, BIND_PW)
+    topology_m2.ms["supplier2"].log.info("##### Supplier2: Bind as %s #####" % BIND_DN)
+    topology_m2.ms["supplier2"].simple_bind_s(BIND_DN, BIND_PW)
 
-    topology_m2.ms["master2"].log.info("Master2: Calling search_ext...")
-    msgid = topology_m2.ms["master2"].search_ext(SUFFIX, ldap.SCOPE_SUBTREE, 'objectclass=*', ['nscpentrywsi'])
+    topology_m2.ms["supplier2"].log.info("Supplier2: Calling search_ext...")
+    msgid = topology_m2.ms["supplier2"].search_ext(SUFFIX, ldap.SCOPE_SUBTREE, 'objectclass=*', ['nscpentrywsi'])
     nscpentrywsicnt = 0
-    rtype, rdata, rmsgid = topology_m2.ms["master2"].result2(msgid)
-    topology_m2.ms["master2"].log.info("%d results" % len(rdata))
+    rtype, rdata, rmsgid = topology_m2.ms["supplier2"].result2(msgid)
+    topology_m2.ms["supplier2"].log.info("%d results" % len(rdata))
 
     for dn, attrs in rdata:
         if 'nscpentrywsi' in attrs:
             nscpentrywsicnt += 1
     assert nscpentrywsicnt == 0
-    topology_m2.ms["master2"].log.info("Master2: count of nscpentrywsi: %d" % nscpentrywsicnt)
+    topology_m2.ms["supplier2"].log.info("Supplier2: count of nscpentrywsi: %d" % nscpentrywsicnt)
 
     # bind as anonymous
-    topology_m2.ms["master1"].log.info("##### Master1: Bind as anonymous #####")
-    topology_m2.ms["master1"].simple_bind_s("", "")
+    topology_m2.ms["supplier1"].log.info("##### Supplier1: Bind as anonymous #####")
+    topology_m2.ms["supplier1"].simple_bind_s("", "")
 
-    topology_m2.ms["master1"].log.info("Master1: Calling search_ext...")
-    msgid = topology_m2.ms["master1"].search_ext(SUFFIX, ldap.SCOPE_SUBTREE, 'objectclass=*', ['nscpentrywsi'])
+    topology_m2.ms["supplier1"].log.info("Supplier1: Calling search_ext...")
+    msgid = topology_m2.ms["supplier1"].search_ext(SUFFIX, ldap.SCOPE_SUBTREE, 'objectclass=*', ['nscpentrywsi'])
     nscpentrywsicnt = 0
-    rtype, rdata, rmsgid = topology_m2.ms["master1"].result2(msgid)
-    topology_m2.ms["master1"].log.info("%d results" % len(rdata))
+    rtype, rdata, rmsgid = topology_m2.ms["supplier1"].result2(msgid)
+    topology_m2.ms["supplier1"].log.info("%d results" % len(rdata))
 
     for dn, attrs in rdata:
         if 'nscpentrywsi' in attrs:
             nscpentrywsicnt += 1
     assert nscpentrywsicnt == 0
-    topology_m2.ms["master1"].log.info("Master1: count of nscpentrywsi: %d" % nscpentrywsicnt)
+    topology_m2.ms["supplier1"].log.info("Supplier1: count of nscpentrywsi: %d" % nscpentrywsicnt)
 
     # bind as bind_entry
-    topology_m2.ms["master2"].log.info("##### Master2: Bind as anonymous #####")
-    topology_m2.ms["master2"].simple_bind_s("", "")
+    topology_m2.ms["supplier2"].log.info("##### Supplier2: Bind as anonymous #####")
+    topology_m2.ms["supplier2"].simple_bind_s("", "")
 
-    topology_m2.ms["master2"].log.info("Master2: Calling search_ext...")
-    msgid = topology_m2.ms["master2"].search_ext(SUFFIX, ldap.SCOPE_SUBTREE, 'objectclass=*', ['nscpentrywsi'])
+    topology_m2.ms["supplier2"].log.info("Supplier2: Calling search_ext...")
+    msgid = topology_m2.ms["supplier2"].search_ext(SUFFIX, ldap.SCOPE_SUBTREE, 'objectclass=*', ['nscpentrywsi'])
     nscpentrywsicnt = 0
-    rtype, rdata, rmsgid = topology_m2.ms["master2"].result2(msgid)
-    topology_m2.ms["master2"].log.info("%d results" % len(rdata))
+    rtype, rdata, rmsgid = topology_m2.ms["supplier2"].result2(msgid)
+    topology_m2.ms["supplier2"].log.info("%d results" % len(rdata))
 
     for dn, attrs in rdata:
         if 'nscpentrywsi' in attrs:
             nscpentrywsicnt += 1
     assert nscpentrywsicnt == 0
-    topology_m2.ms["master2"].log.info("Master2: count of nscpentrywsi: %d" % nscpentrywsicnt)
+    topology_m2.ms["supplier2"].log.info("Supplier2: count of nscpentrywsi: %d" % nscpentrywsicnt)
 
-    topology_m2.ms["master1"].log.info("##### ticket47869 was successfully verified. #####")
+    topology_m2.ms["supplier1"].log.info("##### ticket47869 was successfully verified. #####")
 
 
 if __name__ == '__main__':

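The four search blocks in test_ticket47869_check differ only in the bind identity. Factored out, the whole check is one helper plus a handful of assertions; a minimal sketch assuming the module's SUFFIX, BIND_DN/BIND_PW and DN_DM/PASSWORD constants (the helper is hypothetical):

import ldap

def count_nscpentrywsi(instance, bind_dn, bind_pw):
    # Bind as the given identity, request nscpentrywsi on every entry,
    # and count how many entries actually returned the attribute.
    instance.simple_bind_s(bind_dn, bind_pw)
    msgid = instance.search_ext(SUFFIX, ldap.SCOPE_SUBTREE, 'objectclass=*', ['nscpentrywsi'])
    rtype, rdata, rmsgid = instance.result2(msgid)
    return sum(1 for dn, attrs in rdata if 'nscpentrywsi' in attrs)

Directory Manager is expected to see the attribute (a non-zero count), while the bind entry and the anonymous bind ("" / "") must both yield zero, matching the assertions above.
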
+ 16 - 16
dirsrvtests/tests/tickets/ticket47871_test.py

@@ -41,19 +41,19 @@ def test_ticket47871_init(topology_m1c1):
     """
         Initialize the test environment
     """
-    topology_m1c1.ms["master1"].plugins.enable(name=PLUGIN_RETRO_CHANGELOG)
+    topology_m1c1.ms["supplier1"].plugins.enable(name=PLUGIN_RETRO_CHANGELOG)
     mod = [(ldap.MOD_REPLACE, 'nsslapd-changelogmaxage', b"10s"),  # 10 second trimming
            (ldap.MOD_REPLACE, 'nsslapd-changelog-trim-interval', b"5s")]
-    topology_m1c1.ms["master1"].modify_s("cn=%s,%s" % (PLUGIN_RETRO_CHANGELOG, DN_PLUGIN), mod)
-    # topology_m1c1.ms["master1"].plugins.enable(name=PLUGIN_MEMBER_OF)
-    # topology_m1c1.ms["master1"].plugins.enable(name=PLUGIN_REFER_INTEGRITY)
-    topology_m1c1.ms["master1"].stop(timeout=10)
-    topology_m1c1.ms["master1"].start(timeout=10)
+    topology_m1c1.ms["supplier1"].modify_s("cn=%s,%s" % (PLUGIN_RETRO_CHANGELOG, DN_PLUGIN), mod)
+    # topology_m1c1.ms["supplier1"].plugins.enable(name=PLUGIN_MEMBER_OF)
+    # topology_m1c1.ms["supplier1"].plugins.enable(name=PLUGIN_REFER_INTEGRITY)
+    topology_m1c1.ms["supplier1"].stop(timeout=10)
+    topology_m1c1.ms["supplier1"].start(timeout=10)
 
-    topology_m1c1.ms["master1"].log.info("test_ticket47871_init topology_m1c1 %r" % (topology_m1c1))
+    topology_m1c1.ms["supplier1"].log.info("test_ticket47871_init topology_m1c1 %r" % (topology_m1c1))
     # the test case will check if a warning message is logged in the
     # error log of the supplier
-    topology_m1c1.ms["master1"].errorlog_file = open(topology_m1c1.ms["master1"].errlog, "r")
+    topology_m1c1.ms["supplier1"].errorlog_file = open(topology_m1c1.ms["supplier1"].errlog, "r")
 
 
 def test_ticket47871_1(topology_m1c1):
@@ -63,21 +63,21 @@ def test_ticket47871_1(topology_m1c1):
     # add dummy entries
     for cpt in range(MAX_OTHERS):
         name = "%s%d" % (OTHER_NAME, cpt)
-        topology_m1c1.ms["master1"].add_s(Entry(("cn=%s,%s" % (name, SUFFIX), {
+        topology_m1c1.ms["supplier1"].add_s(Entry(("cn=%s,%s" % (name, SUFFIX), {
             'objectclass': "top person".split(),
             'sn': name,
             'cn': name})))
 
-    topology_m1c1.ms["master1"].log.info(
+    topology_m1c1.ms["supplier1"].log.info(
         "test_ticket47871_init: %d entries ADDed %s[0..%d]" % (MAX_OTHERS, OTHER_NAME, MAX_OTHERS - 1))
 
     # Check the number of entries in the retro changelog
     time.sleep(1)
-    ents = topology_m1c1.ms["master1"].search_s(RETROCL_SUFFIX, ldap.SCOPE_ONELEVEL, "(objectclass=*)")
+    ents = topology_m1c1.ms["supplier1"].search_s(RETROCL_SUFFIX, ldap.SCOPE_ONELEVEL, "(objectclass=*)")
     assert len(ents) == MAX_OTHERS
-    topology_m1c1.ms["master1"].log.info("Added entries are")
+    topology_m1c1.ms["supplier1"].log.info("Added entries are")
     for ent in ents:
-        topology_m1c1.ms["master1"].log.info("%s" % ent.dn)
+        topology_m1c1.ms["supplier1"].log.info("%s" % ent.dn)
 
 
 def test_ticket47871_2(topology_m1c1):
@@ -88,11 +88,11 @@ def test_ticket47871_2(topology_m1c1):
     TRY_NO = 1
     while TRY_NO <= MAX_TRIES:
         time.sleep(6)  # at least 1 trimming occurred
-        ents = topology_m1c1.ms["master1"].search_s(RETROCL_SUFFIX, ldap.SCOPE_ONELEVEL, "(objectclass=*)")
+        ents = topology_m1c1.ms["supplier1"].search_s(RETROCL_SUFFIX, ldap.SCOPE_ONELEVEL, "(objectclass=*)")
         assert len(ents) <= MAX_OTHERS
-        topology_m1c1.ms["master1"].log.info("\nTry no %d it remains %d entries" % (TRY_NO, len(ents)))
+        topology_m1c1.ms["supplier1"].log.info("\nTry no %d it remains %d entries" % (TRY_NO, len(ents)))
         for ent in ents:
-            topology_m1c1.ms["master1"].log.info("%s" % ent.dn)
+            topology_m1c1.ms["supplier1"].log.info("%s" % ent.dn)
         if len(ents) > 1:
             TRY_NO += 1
         else:

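test_ticket47871_2 polls the retro changelog until trimming (10s max age, 5s trim interval from the init step) has removed the dummy entries. The retry loop generalizes to a bounded wait; a hypothetical sketch using the RETROCL_SUFFIX constant the test already searches under:

import time
import ldap

def wait_retrocl_trimmed(supplier, max_left=1, tries=10, pause=6):
    # Hypothetical helper: each pause exceeds the 5s trim interval, so at
    # least one trimming pass runs between successive checks.
    for attempt in range(1, tries + 1):
        time.sleep(pause)
        ents = supplier.search_s(RETROCL_SUFFIX, ldap.SCOPE_ONELEVEL, "(objectclass=*)")
        if len(ents) <= max_left:
            return ents
    raise AssertionError("retro changelog still holds %d entries after %d tries"
                         % (len(ents), tries))
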
+ 99 - 99
dirsrvtests/tests/tickets/ticket47988_test.py

@@ -61,11 +61,11 @@ def _oc_definition(oid_ext, name, must=None, may=None):
 
 
 def _header(topology_m2, label):
-    topology_m2.ms["master1"].log.info("\n\n###############################################")
-    topology_m2.ms["master1"].log.info("#######")
-    topology_m2.ms["master1"].log.info("####### %s" % label)
-    topology_m2.ms["master1"].log.info("#######")
-    topology_m2.ms["master1"].log.info("###################################################")
+    topology_m2.ms["supplier1"].log.info("\n\n###############################################")
+    topology_m2.ms["supplier1"].log.info("#######")
+    topology_m2.ms["supplier1"].log.info("####### %s" % label)
+    topology_m2.ms["supplier1"].log.info("#######")
+    topology_m2.ms["supplier1"].log.info("###################################################")
 
 
 def _install_schema(server, tarFile):
@@ -118,17 +118,17 @@ def test_ticket47988_init(topology_m2):
 
     # enable replication error logging
     mod = [(ldap.MOD_REPLACE, 'nsslapd-errorlog-level', ensure_bytes(str(8192)))]  # REPL
-    topology_m2.ms["master1"].modify_s(DN_CONFIG, mod)
-    topology_m2.ms["master2"].modify_s(DN_CONFIG, mod)
+    topology_m2.ms["supplier1"].modify_s(DN_CONFIG, mod)
+    topology_m2.ms["supplier2"].modify_s(DN_CONFIG, mod)
 
     mod = [(ldap.MOD_REPLACE, 'nsslapd-accesslog-level', ensure_bytes(str(260)))]  # Internal op
-    topology_m2.ms["master1"].modify_s(DN_CONFIG, mod)
-    topology_m2.ms["master2"].modify_s(DN_CONFIG, mod)
+    topology_m2.ms["supplier1"].modify_s(DN_CONFIG, mod)
+    topology_m2.ms["supplier2"].modify_s(DN_CONFIG, mod)
 
     # add dummy entries
     for cpt in range(MAX_OTHERS):
         name = "%s%d" % (OTHER_NAME, cpt)
-        topology_m2.ms["master1"].add_s(Entry(("cn=%s,%s" % (name, SUFFIX), {
+        topology_m2.ms["supplier1"].add_s(Entry(("cn=%s,%s" % (name, SUFFIX), {
             'objectclass': "top person".split(),
             'sn': name,
             'cn': name})))
@@ -138,24 +138,24 @@ def test_ticket47988_init(topology_m2):
     entryDN = "cn=%s0,%s" % (OTHER_NAME, SUFFIX)
     while loop <= 10:
         try:
-            ent = topology_m2.ms["master2"].getEntry(entryDN, ldap.SCOPE_BASE, "(objectclass=*)", ['telephonenumber'])
+            ent = topology_m2.ms["supplier2"].getEntry(entryDN, ldap.SCOPE_BASE, "(objectclass=*)", ['telephonenumber'])
             break
         except ldap.NO_SUCH_OBJECT:
             time.sleep(1)
         loop += 1
     assert (loop <= 10)
 
-    topology_m2.ms["master1"].stop(timeout=10)
-    topology_m2.ms["master2"].stop(timeout=10)
+    topology_m2.ms["supplier1"].stop(timeout=10)
+    topology_m2.ms["supplier2"].stop(timeout=10)
 
     # install the specific schema M1: ipa3.3, M2: ipa4.1
-    schema_file = os.path.join(topology_m2.ms["master1"].getDir(__file__, DATA_DIR), "ticket47988/schema_ipa3.3.tar.gz")
-    _install_schema(topology_m2.ms["master1"], schema_file)
-    schema_file = os.path.join(topology_m2.ms["master1"].getDir(__file__, DATA_DIR), "ticket47988/schema_ipa4.1.tar.gz")
-    _install_schema(topology_m2.ms["master2"], schema_file)
+    schema_file = os.path.join(topology_m2.ms["supplier1"].getDir(__file__, DATA_DIR), "ticket47988/schema_ipa3.3.tar.gz")
+    _install_schema(topology_m2.ms["supplier1"], schema_file)
+    schema_file = os.path.join(topology_m2.ms["supplier1"].getDir(__file__, DATA_DIR), "ticket47988/schema_ipa4.1.tar.gz")
+    _install_schema(topology_m2.ms["supplier2"], schema_file)
 
-    topology_m2.ms["master1"].start(timeout=10)
-    topology_m2.ms["master2"].start(timeout=10)
+    topology_m2.ms["supplier1"].start(timeout=10)
+    topology_m2.ms["supplier2"].start(timeout=10)
 
 
 def _do_update_schema(server, range=3999):
@@ -197,31 +197,31 @@ def _do_update_entry(supplier=None, consumer=None, attempts=10):
 
 
 def _pause_M2_to_M1(topology_m2):
-    topology_m2.ms["master1"].log.info("\n\n######################### Pause RA M2->M1 ######################\n")
-    ents = topology_m2.ms["master2"].agreement.list(suffix=SUFFIX)
+    topology_m2.ms["supplier1"].log.info("\n\n######################### Pause RA M2->M1 ######################\n")
+    ents = topology_m2.ms["supplier2"].agreement.list(suffix=SUFFIX)
     assert len(ents) == 1
-    topology_m2.ms["master2"].agreement.pause(ents[0].dn)
+    topology_m2.ms["supplier2"].agreement.pause(ents[0].dn)
 
 
 def _resume_M1_to_M2(topology_m2):
-    topology_m2.ms["master1"].log.info("\n\n######################### resume RA M1->M2 ######################\n")
-    ents = topology_m2.ms["master1"].agreement.list(suffix=SUFFIX)
+    topology_m2.ms["supplier1"].log.info("\n\n######################### resume RA M1->M2 ######################\n")
+    ents = topology_m2.ms["supplier1"].agreement.list(suffix=SUFFIX)
     assert len(ents) == 1
-    topology_m2.ms["master1"].agreement.resume(ents[0].dn)
+    topology_m2.ms["supplier1"].agreement.resume(ents[0].dn)
 
 
 def _pause_M1_to_M2(topology_m2):
-    topology_m2.ms["master1"].log.info("\n\n######################### Pause RA M1->M2 ######################\n")
-    ents = topology_m2.ms["master1"].agreement.list(suffix=SUFFIX)
+    topology_m2.ms["supplier1"].log.info("\n\n######################### Pause RA M1->M2 ######################\n")
+    ents = topology_m2.ms["supplier1"].agreement.list(suffix=SUFFIX)
     assert len(ents) == 1
-    topology_m2.ms["master1"].agreement.pause(ents[0].dn)
+    topology_m2.ms["supplier1"].agreement.pause(ents[0].dn)
 
 
 def _resume_M2_to_M1(topology_m2):
-    topology_m2.ms["master1"].log.info("\n\n######################### resume RA M2->M1 ######################\n")
-    ents = topology_m2.ms["master2"].agreement.list(suffix=SUFFIX)
+    topology_m2.ms["supplier1"].log.info("\n\n######################### resume RA M2->M1 ######################\n")
+    ents = topology_m2.ms["supplier2"].agreement.list(suffix=SUFFIX)
     assert len(ents) == 1
-    topology_m2.ms["master2"].agreement.resume(ents[0].dn)
+    topology_m2.ms["supplier2"].agreement.resume(ents[0].dn)
 
 
 def test_ticket47988_1(topology_m2):
@@ -230,8 +230,8 @@ def test_ticket47988_1(topology_m2):
     '''
     _header(topology_m2, 'test_ticket47988_1')
 
-    topology_m2.ms["master1"].log.debug("\n\nCheck that replication is working and pause replication M2->M1\n")
-    _do_update_entry(supplier=topology_m2.ms["master2"], consumer=topology_m2.ms["master1"], attempts=5)
+    topology_m2.ms["supplier1"].log.debug("\n\nCheck that replication is working and pause replication M2->M1\n")
+    _do_update_entry(supplier=topology_m2.ms["supplier2"], consumer=topology_m2.ms["supplier1"], attempts=5)
     _pause_M2_to_M1(topology_m2)
 
 
@@ -242,36 +242,36 @@ def test_ticket47988_2(topology_m2):
     '''
     _header(topology_m2, 'test_ticket47988_2')
 
-    topology_m2.ms["master1"].log.debug("\n\nUpdate M1 schema and an entry on M1\n")
-    master1_schema_csn = topology_m2.ms["master1"].schema.get_schema_csn()
-    master2_schema_csn = topology_m2.ms["master2"].schema.get_schema_csn()
-    topology_m2.ms["master1"].log.debug("\nBefore updating the schema on M1\n")
-    topology_m2.ms["master1"].log.debug("Master1 nsschemaCSN: %s" % master1_schema_csn)
-    topology_m2.ms["master1"].log.debug("Master2 nsschemaCSN: %s" % master2_schema_csn)
+    topology_m2.ms["supplier1"].log.debug("\n\nUpdate M1 schema and an entry on M1\n")
+    supplier1_schema_csn = topology_m2.ms["supplier1"].schema.get_schema_csn()
+    supplier2_schema_csn = topology_m2.ms["supplier2"].schema.get_schema_csn()
+    topology_m2.ms["supplier1"].log.debug("\nBefore updating the schema on M1\n")
+    topology_m2.ms["supplier1"].log.debug("Supplier1 nsschemaCSN: %s" % supplier1_schema_csn)
+    topology_m2.ms["supplier1"].log.debug("Supplier2 nsschemaCSN: %s" % supplier2_schema_csn)
 
     # Here M1 should not push its schema; it should check the M2 schema and learn
-    _do_update_schema(topology_m2.ms["master1"])
-    master1_schema_csn = topology_m2.ms["master1"].schema.get_schema_csn()
-    master2_schema_csn = topology_m2.ms["master2"].schema.get_schema_csn()
-    topology_m2.ms["master1"].log.debug("\nAfter updating the schema on M1\n")
-    topology_m2.ms["master1"].log.debug("Master1 nsschemaCSN: %s" % master1_schema_csn)
-    topology_m2.ms["master1"].log.debug("Master2 nsschemaCSN: %s" % master2_schema_csn)
-    assert (master1_schema_csn)
+    _do_update_schema(topology_m2.ms["supplier1"])
+    supplier1_schema_csn = topology_m2.ms["supplier1"].schema.get_schema_csn()
+    supplier2_schema_csn = topology_m2.ms["supplier2"].schema.get_schema_csn()
+    topology_m2.ms["supplier1"].log.debug("\nAfter updating the schema on M1\n")
+    topology_m2.ms["supplier1"].log.debug("Supplier1 nsschemaCSN: %s" % supplier1_schema_csn)
+    topology_m2.ms["supplier1"].log.debug("Supplier2 nsschemaCSN: %s" % supplier2_schema_csn)
+    assert (supplier1_schema_csn)
 
     # to avoid linger effect where a replication session is reused without checking the schema
     _pause_M1_to_M2(topology_m2)
     _resume_M1_to_M2(topology_m2)
 
-    # topo.master1.log.debug("\n\nSleep.... attach the debugger dse_modify")
+    # topo.supplier1.log.debug("\n\nSleep.... attach the debugger dse_modify")
     # time.sleep(60)
-    _do_update_entry(supplier=topology_m2.ms["master1"], consumer=topology_m2.ms["master2"], attempts=15)
-    master1_schema_csn = topology_m2.ms["master1"].schema.get_schema_csn()
-    master2_schema_csn = topology_m2.ms["master2"].schema.get_schema_csn()
-    topology_m2.ms["master1"].log.debug("\nAfter a full replication session\n")
-    topology_m2.ms["master1"].log.debug("Master1 nsschemaCSN: %s" % master1_schema_csn)
-    topology_m2.ms["master1"].log.debug("Master2 nsschemaCSN: %s" % master2_schema_csn)
-    assert (master1_schema_csn)
-    assert (master2_schema_csn)
+    _do_update_entry(supplier=topology_m2.ms["supplier1"], consumer=topology_m2.ms["supplier2"], attempts=15)
+    supplier1_schema_csn = topology_m2.ms["supplier1"].schema.get_schema_csn()
+    supplier2_schema_csn = topology_m2.ms["supplier2"].schema.get_schema_csn()
+    topology_m2.ms["supplier1"].log.debug("\nAfter a full replication session\n")
+    topology_m2.ms["supplier1"].log.debug("Supplier1 nsschemaCSN: %s" % supplier1_schema_csn)
+    topology_m2.ms["supplier1"].log.debug("Supplier2 nsschemaCSN: %s" % supplier2_schema_csn)
+    assert (supplier1_schema_csn)
+    assert (supplier2_schema_csn)
 
 
 def test_ticket47988_3(topology_m2):
@@ -281,8 +281,8 @@ def test_ticket47988_3(topology_m2):
     _header(topology_m2, 'test_ticket47988_3')
 
     _resume_M2_to_M1(topology_m2)
-    _do_update_entry(supplier=topology_m2.ms["master1"], consumer=topology_m2.ms["master2"], attempts=5)
-    _do_update_entry(supplier=topology_m2.ms["master2"], consumer=topology_m2.ms["master1"], attempts=5)
+    _do_update_entry(supplier=topology_m2.ms["supplier1"], consumer=topology_m2.ms["supplier2"], attempts=5)
+    _do_update_entry(supplier=topology_m2.ms["supplier2"], consumer=topology_m2.ms["supplier1"], attempts=5)
 
 
 def test_ticket47988_4(topology_m2):
@@ -292,16 +292,16 @@ def test_ticket47988_4(topology_m2):
     '''
     _header(topology_m2, 'test_ticket47988_4')
 
-    master1_schema_csn = topology_m2.ms["master1"].schema.get_schema_csn()
-    master2_schema_csn = topology_m2.ms["master2"].schema.get_schema_csn()
-    topology_m2.ms["master1"].log.debug("\n\nMaster1 nsschemaCSN: %s" % master1_schema_csn)
-    topology_m2.ms["master1"].log.debug("\n\nMaster2 nsschemaCSN: %s" % master2_schema_csn)
-    assert (master1_schema_csn)
-    assert (master2_schema_csn)
-    assert (master1_schema_csn == master2_schema_csn)
+    supplier1_schema_csn = topology_m2.ms["supplier1"].schema.get_schema_csn()
+    supplier2_schema_csn = topology_m2.ms["supplier2"].schema.get_schema_csn()
+    topology_m2.ms["supplier1"].log.debug("\n\nSupplier1 nsschemaCSN: %s" % supplier1_schema_csn)
+    topology_m2.ms["supplier1"].log.debug("\n\nSupplier2 nsschemaCSN: %s" % supplier2_schema_csn)
+    assert (supplier1_schema_csn)
+    assert (supplier2_schema_csn)
+    assert (supplier1_schema_csn == supplier2_schema_csn)
 
-    topology_m2.ms["master1"].saved_schema_csn = master1_schema_csn
-    topology_m2.ms["master2"].saved_schema_csn = master2_schema_csn
+    topology_m2.ms["supplier1"].saved_schema_csn = supplier1_schema_csn
+    topology_m2.ms["supplier2"].saved_schema_csn = supplier2_schema_csn
 
 
 def test_ticket47988_5(topology_m2):
@@ -310,18 +310,18 @@ def test_ticket47988_5(topology_m2):
     '''
     _header(topology_m2, 'test_ticket47988_5')
 
-    _do_update_entry(supplier=topology_m2.ms["master1"], consumer=topology_m2.ms["master2"], attempts=5)
-    _do_update_entry(supplier=topology_m2.ms["master2"], consumer=topology_m2.ms["master1"], attempts=5)
-    master1_schema_csn = topology_m2.ms["master1"].schema.get_schema_csn()
-    master2_schema_csn = topology_m2.ms["master2"].schema.get_schema_csn()
-    topology_m2.ms["master1"].log.debug("\n\nMaster1 nsschemaCSN: %s" % master1_schema_csn)
-    topology_m2.ms["master1"].log.debug("\n\nMaster2 nsschemaCSN: %s" % master2_schema_csn)
-    assert (master1_schema_csn)
-    assert (master2_schema_csn)
-    assert (master1_schema_csn == master2_schema_csn)
+    _do_update_entry(supplier=topology_m2.ms["supplier1"], consumer=topology_m2.ms["supplier2"], attempts=5)
+    _do_update_entry(supplier=topology_m2.ms["supplier2"], consumer=topology_m2.ms["supplier1"], attempts=5)
+    supplier1_schema_csn = topology_m2.ms["supplier1"].schema.get_schema_csn()
+    supplier2_schema_csn = topology_m2.ms["supplier2"].schema.get_schema_csn()
+    topology_m2.ms["supplier1"].log.debug("\n\nSupplier1 nsschemaCSN: %s" % supplier1_schema_csn)
+    topology_m2.ms["supplier1"].log.debug("\n\nSupplier2 nsschemaCSN: %s" % supplier2_schema_csn)
+    assert (supplier1_schema_csn)
+    assert (supplier2_schema_csn)
+    assert (supplier1_schema_csn == supplier2_schema_csn)
 
-    assert (topology_m2.ms["master1"].saved_schema_csn == master1_schema_csn)
-    assert (topology_m2.ms["master2"].saved_schema_csn == master2_schema_csn)
+    assert (topology_m2.ms["supplier1"].saved_schema_csn == supplier1_schema_csn)
+    assert (topology_m2.ms["supplier2"].saved_schema_csn == supplier2_schema_csn)
 
 
 def test_ticket47988_6(topology_m2):
@@ -332,36 +332,36 @@ def test_ticket47988_6(topology_m2):
 
     _header(topology_m2, 'test_ticket47988_6')
 
-    topology_m2.ms["master1"].log.debug("\n\nUpdate M1 schema and an entry on M1\n")
-    master1_schema_csn = topology_m2.ms["master1"].schema.get_schema_csn()
-    master2_schema_csn = topology_m2.ms["master2"].schema.get_schema_csn()
-    topology_m2.ms["master1"].log.debug("\nBefore updating the schema on M1\n")
-    topology_m2.ms["master1"].log.debug("Master1 nsschemaCSN: %s" % master1_schema_csn)
-    topology_m2.ms["master1"].log.debug("Master2 nsschemaCSN: %s" % master2_schema_csn)
+    topology_m2.ms["supplier1"].log.debug("\n\nUpdate M1 schema and an entry on M1\n")
+    supplier1_schema_csn = topology_m2.ms["supplier1"].schema.get_schema_csn()
+    supplier2_schema_csn = topology_m2.ms["supplier2"].schema.get_schema_csn()
+    topology_m2.ms["supplier1"].log.debug("\nBefore updating the schema on M1\n")
+    topology_m2.ms["supplier1"].log.debug("Supplier1 nsschemaCSN: %s" % supplier1_schema_csn)
+    topology_m2.ms["supplier1"].log.debug("Supplier2 nsschemaCSN: %s" % supplier2_schema_csn)
 
     # Here M1 should not push its schema; it should check M2's schema and learn
-    _do_update_schema(topology_m2.ms["master1"], range=5999)
-    master1_schema_csn = topology_m2.ms["master1"].schema.get_schema_csn()
-    master2_schema_csn = topology_m2.ms["master2"].schema.get_schema_csn()
-    topology_m2.ms["master1"].log.debug("\nAfter updating the schema on M1\n")
-    topology_m2.ms["master1"].log.debug("Master1 nsschemaCSN: %s" % master1_schema_csn)
-    topology_m2.ms["master1"].log.debug("Master2 nsschemaCSN: %s" % master2_schema_csn)
-    assert (master1_schema_csn)
+    _do_update_schema(topology_m2.ms["supplier1"], range=5999)
+    supplier1_schema_csn = topology_m2.ms["supplier1"].schema.get_schema_csn()
+    supplier2_schema_csn = topology_m2.ms["supplier2"].schema.get_schema_csn()
+    topology_m2.ms["supplier1"].log.debug("\nAfter updating the schema on M1\n")
+    topology_m2.ms["supplier1"].log.debug("Supplier1 nsschemaCSN: %s" % supplier1_schema_csn)
+    topology_m2.ms["supplier1"].log.debug("Supplier2 nsschemaCSN: %s" % supplier2_schema_csn)
+    assert (supplier1_schema_csn)
 
     # to avoid linger effect where a replication session is reused without checking the schema
     _pause_M1_to_M2(topology_m2)
     _resume_M1_to_M2(topology_m2)
 
-    # topo.master1.log.debug("\n\nSleep.... attach the debugger dse_modify")
+    # topo.supplier1.log.debug("\n\nSleep.... attach the debugger dse_modify")
     # time.sleep(60)
-    _do_update_entry(supplier=topology_m2.ms["master2"], consumer=topology_m2.ms["master1"], attempts=15)
-    master1_schema_csn = topology_m2.ms["master1"].schema.get_schema_csn()
-    master2_schema_csn = topology_m2.ms["master2"].schema.get_schema_csn()
-    topology_m2.ms["master1"].log.debug("\nAfter a full replication session\n")
-    topology_m2.ms["master1"].log.debug("Master1 nsschemaCSN: %s" % master1_schema_csn)
-    topology_m2.ms["master1"].log.debug("Master2 nsschemaCSN: %s" % master2_schema_csn)
-    assert (master1_schema_csn)
-    assert (master2_schema_csn)
+    _do_update_entry(supplier=topology_m2.ms["supplier2"], consumer=topology_m2.ms["supplier1"], attempts=15)
+    supplier1_schema_csn = topology_m2.ms["supplier1"].schema.get_schema_csn()
+    supplier2_schema_csn = topology_m2.ms["supplier2"].schema.get_schema_csn()
+    topology_m2.ms["supplier1"].log.debug("\nAfter a full replication session\n")
+    topology_m2.ms["supplier1"].log.debug("Supplier1 nsschemaCSN: %s" % supplier1_schema_csn)
+    topology_m2.ms["supplier1"].log.debug("Supplier2 nsschemaCSN: %s" % supplier2_schema_csn)
+    assert (supplier1_schema_csn)
+    assert (supplier2_schema_csn)
 
 
 if __name__ == '__main__':
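
The ticket47988 hunks above repeat one convergence check several times: fetch nsschemaCSN from both suppliers and require matching, non-empty values. A minimal sketch of that check, assuming the same topology_m2 fixture (the helper name is hypothetical, not part of this commit):

    def assert_schema_csn_in_sync(topology_m2):
        # Both suppliers must report a schema CSN, and the values must
        # match once a full replication session has completed; these are
        # the same assertions test_ticket47988_4/_5 make inline.
        s1_csn = topology_m2.ms["supplier1"].schema.get_schema_csn()
        s2_csn = topology_m2.ms["supplier2"].schema.get_schema_csn()
        assert s1_csn
        assert s2_csn
        assert s1_csn == s2_csn
        return s1_csn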

+ 43 - 43
dirsrvtests/tests/tickets/ticket48266_test.py

@@ -4,7 +4,7 @@ from lib389.utils import *
 from lib389.topologies import topology_m2
 from lib389.replica import ReplicationManager
 
-from lib389._constants import SUFFIX, DEFAULT_SUFFIX, HOST_MASTER_2, PORT_MASTER_2
+from lib389._constants import SUFFIX, DEFAULT_SUFFIX, HOST_SUPPLIER_2, PORT_SUPPLIER_2
 
 pytestmark = pytest.mark.tier2
 
@@ -20,53 +20,53 @@ def entries(topology_m2):
     # add dummy entries in the staging DIT
     for cpt in range(MAX_ACCOUNTS):
         name = "%s%d" % (NEW_ACCOUNT, cpt)
-        topology_m2.ms["master1"].add_s(Entry(("cn=%s,%s" % (name, SUFFIX), {
+        topology_m2.ms["supplier1"].add_s(Entry(("cn=%s,%s" % (name, SUFFIX), {
             'objectclass': "top person".split(),
             'sn': name,
             'cn': name})))
-    topology_m2.ms["master1"].config.set('nsslapd-accesslog-logbuffering', 'off')
-    topology_m2.ms["master1"].config.set('nsslapd-errorlog-level', '8192')
+    topology_m2.ms["supplier1"].config.set('nsslapd-accesslog-logbuffering', 'off')
+    topology_m2.ms["supplier1"].config.set('nsslapd-errorlog-level', '8192')
     # 256 + 4
-    topology_m2.ms["master1"].config.set('nsslapd-accesslog-level', '260')
+    topology_m2.ms["supplier1"].config.set('nsslapd-accesslog-level', '260')
 
-    topology_m2.ms["master2"].config.set('nsslapd-accesslog-logbuffering', 'off')
-    topology_m2.ms["master2"].config.set('nsslapd-errorlog-level', '8192')
+    topology_m2.ms["supplier2"].config.set('nsslapd-accesslog-logbuffering', 'off')
+    topology_m2.ms["supplier2"].config.set('nsslapd-errorlog-level', '8192')
     # 256 + 4
-    topology_m2.ms["master2"].config.set('nsslapd-accesslog-level', '260')
+    topology_m2.ms["supplier2"].config.set('nsslapd-accesslog-level', '260')
 
 
 def test_ticket48266_fractional(topology_m2, entries):
-    ents = topology_m2.ms["master1"].agreement.list(suffix=SUFFIX)
+    ents = topology_m2.ms["supplier1"].agreement.list(suffix=SUFFIX)
     assert len(ents) == 1
 
     mod = [(ldap.MOD_REPLACE, 'nsDS5ReplicatedAttributeList', [b'(objectclass=*) $ EXCLUDE telephonenumber']),
            (ldap.MOD_REPLACE, 'nsds5ReplicaStripAttrs', [b'modifiersname modifytimestamp'])]
-    ents = topology_m2.ms["master1"].agreement.list(suffix=SUFFIX)
+    ents = topology_m2.ms["supplier1"].agreement.list(suffix=SUFFIX)
     assert len(ents) == 1
     m1_m2_agmt = ents[0].dn
-    topology_m2.ms["master1"].modify_s(ents[0].dn, mod)
+    topology_m2.ms["supplier1"].modify_s(ents[0].dn, mod)
 
-    ents = topology_m2.ms["master2"].agreement.list(suffix=SUFFIX)
+    ents = topology_m2.ms["supplier2"].agreement.list(suffix=SUFFIX)
     assert len(ents) == 1
-    topology_m2.ms["master2"].modify_s(ents[0].dn, mod)
+    topology_m2.ms["supplier2"].modify_s(ents[0].dn, mod)
 
-    topology_m2.ms["master1"].restart()
-    topology_m2.ms["master2"].restart()
+    topology_m2.ms["supplier1"].restart()
+    topology_m2.ms["supplier2"].restart()
 
     repl = ReplicationManager(DEFAULT_SUFFIX)
-    repl.ensure_agreement(topology_m2.ms["master1"], topology_m2.ms["master2"])
-    repl.test_replication(topology_m2.ms["master1"], topology_m2.ms["master2"])
+    repl.ensure_agreement(topology_m2.ms["supplier1"], topology_m2.ms["supplier2"])
+    repl.test_replication(topology_m2.ms["supplier1"], topology_m2.ms["supplier2"])
 
 
 def test_ticket48266_check_repl_desc(topology_m2, entries):
     name = "cn=%s1,%s" % (NEW_ACCOUNT, SUFFIX)
     value = 'check repl. description'
     mod = [(ldap.MOD_REPLACE, 'description', ensure_bytes(value))]
-    topology_m2.ms["master1"].modify_s(name, mod)
+    topology_m2.ms["supplier1"].modify_s(name, mod)
 
     loop = 0
     while loop <= 10:
-        ent = topology_m2.ms["master2"].getEntry(name, ldap.SCOPE_BASE, "(objectclass=*)")
+        ent = topology_m2.ms["supplier2"].getEntry(name, ldap.SCOPE_BASE, "(objectclass=*)")
         if ent.hasAttr('description') and ent.getValue('description') == ensure_bytes(value):
             break
         time.sleep(1)
@@ -82,12 +82,12 @@ def _get_last_not_replicated_csn(topology_m2):
 
     # read the first CSN that will not be replicated
     mod = [(ldap.MOD_REPLACE, 'telephonenumber', ensure_bytes('123456'))]
-    topology_m2.ms["master1"].modify_s(name, mod)
-    msgid = topology_m2.ms["master1"].search_ext(name, ldap.SCOPE_SUBTREE, 'objectclass=*', ['nscpentrywsi'])
-    rtype, rdata, rmsgid = topology_m2.ms["master1"].result2(msgid)
+    topology_m2.ms["supplier1"].modify_s(name, mod)
+    msgid = topology_m2.ms["supplier1"].search_ext(name, ldap.SCOPE_SUBTREE, 'objectclass=*', ['nscpentrywsi'])
+    rtype, rdata, rmsgid = topology_m2.ms["supplier1"].result2(msgid)
     attrs = None
     for dn, raw_attrs in rdata:
-        topology_m2.ms["master1"].log.info("dn: %s" % dn)
+        topology_m2.ms["supplier1"].log.info("dn: %s" % dn)
         if 'nscpentrywsi' in raw_attrs:
             attrs = raw_attrs['nscpentrywsi']
     assert attrs
@@ -99,15 +99,15 @@ def _get_last_not_replicated_csn(topology_m2):
     log.info("############# %s " % name)
     # now retrieve the CSN of the operation we are looking for
     csn = None
-    found_ops = topology_m2.ms['master1'].ds_access_log.match(".*MOD dn=\"%s\".*" % name)
+    found_ops = topology_m2.ms['supplier1'].ds_access_log.match(".*MOD dn=\"%s\".*" % name)
     assert(len(found_ops) > 0)
-    found_op = topology_m2.ms['master1'].ds_access_log.parse_line(found_ops[-1])
+    found_op = topology_m2.ms['supplier1'].ds_access_log.parse_line(found_ops[-1])
     log.info(found_op)
 
     # Now look for the related CSN
-    found_csns = topology_m2.ms['master1'].ds_access_log.match(".*conn=%s op=%s RESULT.*" % (found_op['conn'], found_op['op']))
+    found_csns = topology_m2.ms['supplier1'].ds_access_log.match(".*conn=%s op=%s RESULT.*" % (found_op['conn'], found_op['op']))
     assert(len(found_csns) > 0)
-    found_csn = topology_m2.ms['master1'].ds_access_log.parse_line(found_csns[-1])
+    found_csn = topology_m2.ms['supplier1'].ds_access_log.parse_line(found_csns[-1])
     log.info(found_csn)
     return found_csn['csn']
 
@@ -117,12 +117,12 @@ def _get_first_not_replicated_csn(topology_m2):
 
     # read the first CSN that will not be replicated
     mod = [(ldap.MOD_REPLACE, 'telephonenumber', ensure_bytes('123456'))]
-    topology_m2.ms["master1"].modify_s(name, mod)
-    msgid = topology_m2.ms["master1"].search_ext(name, ldap.SCOPE_SUBTREE, 'objectclass=*', ['nscpentrywsi'])
-    rtype, rdata, rmsgid = topology_m2.ms["master1"].result2(msgid)
+    topology_m2.ms["supplier1"].modify_s(name, mod)
+    msgid = topology_m2.ms["supplier1"].search_ext(name, ldap.SCOPE_SUBTREE, 'objectclass=*', ['nscpentrywsi'])
+    rtype, rdata, rmsgid = topology_m2.ms["supplier1"].result2(msgid)
     attrs = None
     for dn, raw_attrs in rdata:
-        topology_m2.ms["master1"].log.info("dn: %s" % dn)
+        topology_m2.ms["supplier1"].log.info("dn: %s" % dn)
         if 'nscpentrywsi' in raw_attrs:
             attrs = raw_attrs['nscpentrywsi']
     assert attrs
@@ -134,15 +134,15 @@ def _get_first_not_replicated_csn(topology_m2):
     log.info("############# %s " % name)
     # now retrieve the CSN of the operation we are looking for
     csn = None
-    found_ops = topology_m2.ms['master1'].ds_access_log.match(".*MOD dn=\"%s\".*" % name)
+    found_ops = topology_m2.ms['supplier1'].ds_access_log.match(".*MOD dn=\"%s\".*" % name)
     assert(len(found_ops) > 0)
-    found_op = topology_m2.ms['master1'].ds_access_log.parse_line(found_ops[-1])
+    found_op = topology_m2.ms['supplier1'].ds_access_log.parse_line(found_ops[-1])
     log.info(found_op)
 
     # Now look for the related CSN
-    found_csns = topology_m2.ms['master1'].ds_access_log.match(".*conn=%s op=%s RESULT.*" % (found_op['conn'], found_op['op']))
+    found_csns = topology_m2.ms['supplier1'].ds_access_log.match(".*conn=%s op=%s RESULT.*" % (found_op['conn'], found_op['op']))
     assert(len(found_csns) > 0)
-    found_csn = topology_m2.ms['master1'].ds_access_log.parse_line(found_csns[-1])
+    found_csn = topology_m2.ms['supplier1'].ds_access_log.parse_line(found_csns[-1])
     log.info(found_csn)
     return found_csn['csn']
 
@@ -151,7 +151,7 @@ def _count_full_session(topology_m2):
     #
     # compute the number of 'No more updates'
     #
-    file_obj = open(topology_m2.ms["master1"].errlog, "r")
+    file_obj = open(topology_m2.ms["supplier1"].errlog, "r")
     # pattern to find
     pattern = ".*No more updates to send.*"
     regex = re.compile(pattern)
@@ -171,20 +171,20 @@ def _count_full_session(topology_m2):
 
 
 def test_ticket48266_count_csn_evaluation(topology_m2, entries):
-    ents = topology_m2.ms["master1"].agreement.list(suffix=SUFFIX)
+    ents = topology_m2.ms["supplier1"].agreement.list(suffix=SUFFIX)
     assert len(ents) == 1
     first_csn = _get_first_not_replicated_csn(topology_m2)
     name = "cn=%s3,%s" % (NEW_ACCOUNT, SUFFIX)
     NB_SESSION = 102
 
     no_more_update_cnt = _count_full_session(topology_m2)
-    topology_m2.ms["master1"].agreement.pause(ents[0].dn)
+    topology_m2.ms["supplier1"].agreement.pause(ents[0].dn)
     # now do a set of updates that will NOT be replicated
     for telNumber in range(NB_SESSION):
         mod = [(ldap.MOD_REPLACE, 'telephonenumber', ensure_bytes(str(telNumber)))]
-        topology_m2.ms["master1"].modify_s(name, mod)
+        topology_m2.ms["supplier1"].modify_s(name, mod)
 
-    topology_m2.ms["master1"].agreement.resume(ents[0].dn)
+    topology_m2.ms["supplier1"].agreement.resume(ents[0].dn)
 
     # let's wait all replication session complete
     MAX_LOOP = 10
@@ -222,9 +222,9 @@ def test_ticket48266_count_csn_evaluation(topology_m2, entries):
 
     # so we should no longer see the first_csn in the log
     # Let's create a new csn (last_csn) and check there is no longer first_csn
-    topology_m2.ms["master1"].agreement.pause(ents[0].dn)
+    topology_m2.ms["supplier1"].agreement.pause(ents[0].dn)
     last_csn = _get_last_not_replicated_csn(topology_m2)
-    topology_m2.ms["master1"].agreement.resume(ents[0].dn)
+    topology_m2.ms["supplier1"].agreement.resume(ents[0].dn)
 
     # let's wait for the session to complete
     MAX_LOOP = 10
@@ -242,7 +242,7 @@ def test_ticket48266_count_csn_evaluation(topology_m2, entries):
 
     # Now determine how many times we have skipped 'csn'
     # no need to stop the server to check the error log
-    file_obj = open(topology_m2.ms["master1"].errlog, "r")
+    file_obj = open(topology_m2.ms["supplier1"].errlog, "r")
 
     # find where the last_csn operation was processed
     pattern = ".*ruv_add_csn_inprogress: successfully inserted csn %s.*" % last_csn

+ 12 - 12
dirsrvtests/tests/tickets/ticket48325_test.py

@@ -44,10 +44,10 @@ def test_ticket48325(topology_m1h1c1):
     """
 
     #
-    # Promote consumer to master
+    # Promote consumer to supplier
     #
     C1 = topology_m1h1c1.cs["consumer1"]
-    M1 = topology_m1h1c1.ms["master1"]
+    M1 = topology_m1h1c1.ms["supplier1"]
     H1 = topology_m1h1c1.hs["hub1"]
     repl = ReplicationManager(DEFAULT_SUFFIX)
     repl._ensure_changelog(C1)
@@ -70,37 +70,37 @@ def test_ticket48325(topology_m1h1c1):
         log.fatal('RUV was not reordered')
         assert False
 
-    topology_m1h1c1.ms["master1"].add_s(Entry((defaultProperties[REPLICATION_BIND_DN],
+    topology_m1h1c1.ms["supplier1"].add_s(Entry((defaultProperties[REPLICATION_BIND_DN],
                                                {'objectclass': 'top netscapeServer'.split(),
                                                 'cn': 'replication manager',
                                                 'userPassword': 'password'})))
 
-    DN = topology_m1h1c1.ms["master1"].replica._get_mt_entry(DEFAULT_SUFFIX)
-    topology_m1h1c1.ms["master1"].modify_s(DN, [(ldap.MOD_REPLACE,
+    DN = topology_m1h1c1.ms["supplier1"].replica._get_mt_entry(DEFAULT_SUFFIX)
+    topology_m1h1c1.ms["supplier1"].modify_s(DN, [(ldap.MOD_REPLACE,
                                                  'nsDS5ReplicaBindDN', ensure_bytes(defaultProperties[REPLICATION_BIND_DN]))])
     #
-    # Create repl agreement from the newly promoted master to master1
+    # Create repl agreement from the newly promoted supplier to supplier1
 
-    properties = {RA_NAME: 'meTo_{}:{}'.format(topology_m1h1c1.ms["master1"].host,
-                                               str(topology_m1h1c1.ms["master1"].port)),
+    properties = {RA_NAME: 'meTo_{}:{}'.format(topology_m1h1c1.ms["supplier1"].host,
+                                               str(topology_m1h1c1.ms["supplier1"].port)),
                   RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                   RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                   RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                   RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
     new_agmt = topology_m1h1c1.cs["consumer1"].agreement.create(suffix=SUFFIX,
-                                                                host=topology_m1h1c1.ms["master1"].host,
-                                                                port=topology_m1h1c1.ms["master1"].port,
+                                                                host=topology_m1h1c1.ms["supplier1"].host,
+                                                                port=topology_m1h1c1.ms["supplier1"].port,
                                                                 properties=properties)
 
     if not new_agmt:
-        log.fatal("Fail to create new agmt from old consumer to the master")
+        log.fatal("Failed to create new agmt from old consumer to the supplier")
         assert False
 
     # Test replication is working
     repl.test_replication(C1, M1)
 
     #
-    # Promote hub to master
+    # Promote hub to supplier
     #
     DN = topology_m1h1c1.hs["hub1"].replica._get_mt_entry(DEFAULT_SUFFIX)
     topology_m1h1c1.hs["hub1"].modify_s(DN, [(ldap.MOD_REPLACE,

+ 21 - 21
dirsrvtests/tests/tickets/ticket48342_test.py

@@ -51,7 +51,7 @@ def _dna_config(server, nextValue=500, maxValue=510):
 def test_ticket4026(topology_m3):
     """Write your replication testcase here.
 
-    To access each DirSrv instance use:  topology_m3.ms["master1"], topology_m3.ms["master2"],
+    To access each DirSrv instance use:  topology_m3.ms["supplier1"], topology_m3.ms["supplier2"],
         ..., topology_m3.hub1, ..., topology_m3.consumer1, ...
 
     Also, if you need any testcase initialization,
@@ -59,19 +59,19 @@ def test_ticket4026(topology_m3):
     """
 
     try:
-        topology_m3.ms["master1"].add_s(Entry((PEOPLE_DN, {
+        topology_m3.ms["supplier1"].add_s(Entry((PEOPLE_DN, {
             'objectclass': "top extensibleObject".split(),
             'ou': 'people'})))
     except ldap.ALREADY_EXISTS:
         pass
 
-    topology_m3.ms["master1"].add_s(Entry(('ou=ranges,' + SUFFIX, {
+    topology_m3.ms["supplier1"].add_s(Entry(('ou=ranges,' + SUFFIX, {
         'objectclass': 'top organizationalunit'.split(),
         'ou': 'ranges'
     })))
     for cpt in range(MAX_ACCOUNTS):
         name = "user%d" % (cpt)
-        topology_m3.ms["master1"].add_s(Entry(("uid=%s,%s" % (name, PEOPLE_DN), {
+        topology_m3.ms["supplier1"].add_s(Entry(("uid=%s,%s" % (name, PEOPLE_DN), {
             'objectclass': 'top posixAccount extensibleObject'.split(),
             'uid': name,
             'cn': name,
@@ -80,28 +80,28 @@ def test_ticket4026(topology_m3):
             'homeDirectory': '/home/%s' % name
         })))
 
-    # make master3 having more free slots that master2
-    # so master1 will contact master3
-    _dna_config(topology_m3.ms["master1"], nextValue=100, maxValue=10)
-    _dna_config(topology_m3.ms["master2"], nextValue=200, maxValue=10)
-    _dna_config(topology_m3.ms["master3"], nextValue=300, maxValue=3000)
+    # make supplier3 have more free slots than supplier2
+    # so supplier1 will contact supplier3
+    _dna_config(topology_m3.ms["supplier1"], nextValue=100, maxValue=10)
+    _dna_config(topology_m3.ms["supplier2"], nextValue=200, maxValue=10)
+    _dna_config(topology_m3.ms["supplier3"], nextValue=300, maxValue=3000)
 
     # Turn on lots of error logging now.
 
     mod = [(ldap.MOD_REPLACE, 'nsslapd-errorlog-level', b'16384')]
     # mod = [(ldap.MOD_REPLACE, 'nsslapd-errorlog-level', '1')]
-    topology_m3.ms["master1"].modify_s('cn=config', mod)
-    topology_m3.ms["master2"].modify_s('cn=config', mod)
-    topology_m3.ms["master3"].modify_s('cn=config', mod)
+    topology_m3.ms["supplier1"].modify_s('cn=config', mod)
+    topology_m3.ms["supplier2"].modify_s('cn=config', mod)
+    topology_m3.ms["supplier3"].modify_s('cn=config', mod)
 
     # We need to wait for the event in dna.c to fire to start the servers
     # see dna.c line 899
     time.sleep(60)
 
-    # add on master1 users with description DNA
+    # add on supplier1 users with description DNA
     for cpt in range(10):
         name = "user_with_desc1_%d" % (cpt)
-        topology_m3.ms["master1"].add_s(Entry(("uid=%s,%s" % (name, PEOPLE_DN), {
+        topology_m3.ms["supplier1"].add_s(Entry(("uid=%s,%s" % (name, PEOPLE_DN), {
             'objectclass': 'top posixAccount extensibleObject'.split(),
             'uid': name,
             'cn': name,
@@ -110,12 +110,12 @@ def test_ticket4026(topology_m3):
             'gidNumber': '1',
             'homeDirectory': '/home/%s' % name
         })))
-    # give time to negociate master1 <--> master3
+    # give time to negotiate supplier1 <--> supplier3
     time.sleep(10)
-    # add on master1 users with description DNA
+    # add on supplier1 users with description DNA
     for cpt in range(11, 20):
         name = "user_with_desc1_%d" % (cpt)
-        topology_m3.ms["master1"].add_s(Entry(("uid=%s,%s" % (name, PEOPLE_DN), {
+        topology_m3.ms["supplier1"].add_s(Entry(("uid=%s,%s" % (name, PEOPLE_DN), {
             'objectclass': 'top posixAccount extensibleObject'.split(),
             'uid': name,
             'cn': name,
@@ -125,12 +125,12 @@ def test_ticket4026(topology_m3):
             'homeDirectory': '/home/%s' % name
         })))
     log.info('Test complete')
-    # add on master1 users with description DNA
+    # add on supplier1 users with description DNA
     mod = [(ldap.MOD_REPLACE, 'nsslapd-errorlog-level', b'16384')]
     # mod = [(ldap.MOD_REPLACE, 'nsslapd-errorlog-level', '1')]
-    topology_m3.ms["master1"].modify_s('cn=config', mod)
-    topology_m3.ms["master2"].modify_s('cn=config', mod)
-    topology_m3.ms["master3"].modify_s('cn=config', mod)
+    topology_m3.ms["supplier1"].modify_s('cn=config', mod)
+    topology_m3.ms["supplier2"].modify_s('cn=config', mod)
+    topology_m3.ms["supplier3"].modify_s('cn=config', mod)
 
     log.info('Test complete')
 

+ 22 - 22
dirsrvtests/tests/tickets/ticket48362_test.py

@@ -97,7 +97,7 @@ def _shared_cfg_server_update(server, method=BINDMETHOD_VALUE, transport=PROTOCO
 def test_ticket48362(topology_m2):
     """Write your replication testcase here.
 
-    To access each DirSrv instance use:  topology_m2.ms["master1"], topology_m2.ms["master2"],
+    To access each DirSrv instance use:  topology_m2.ms["supplier1"], topology_m2.ms["supplier2"],
         ..., topology_m2.hub1, ..., topology_m2.consumer1, ...
 
     Also, if you need any testcase initialization,
@@ -105,57 +105,57 @@ def test_ticket48362(topology_m2):
     """
 
     try:
-        topology_m2.ms["master1"].add_s(Entry((PEOPLE_DN, {
+        topology_m2.ms["supplier1"].add_s(Entry((PEOPLE_DN, {
             'objectclass': "top extensibleObject".split(),
             'ou': 'people'})))
     except ldap.ALREADY_EXISTS:
         pass
 
-    topology_m2.ms["master1"].add_s(Entry((SHARE_CFG_BASE, {
+    topology_m2.ms["supplier1"].add_s(Entry((SHARE_CFG_BASE, {
         'objectclass': 'top organizationalunit'.split(),
         'ou': 'ranges'
     })))
-    # master 1 will have a valid remaining range (i.e. 101)
-    # master 2 will not have a valid remaining range (i.e. 0) so dna servers list on master2
-    # will not contain master 2. So at restart, master 2 is recreated without the method/protocol attribute
-    _dna_config(topology_m2.ms["master1"], nextValue=1000, maxValue=100)
-    _dna_config(topology_m2.ms["master2"], nextValue=2000, maxValue=-1)
+    # supplier 1 will have a valid remaining range (i.e. 101)
+    # supplier 2 will not have a valid remaining range (i.e. 0) so dna servers list on supplier2
+    # will not contain supplier 2. So at restart, supplier 2 is recreated without the method/protocol attribute
+    _dna_config(topology_m2.ms["supplier1"], nextValue=1000, maxValue=100)
+    _dna_config(topology_m2.ms["supplier2"], nextValue=2000, maxValue=-1)
 
     # check we have all the servers available
-    _wait_shared_cfg_servers(topology_m2.ms["master1"], 2)
-    _wait_shared_cfg_servers(topology_m2.ms["master2"], 2)
+    _wait_shared_cfg_servers(topology_m2.ms["supplier1"], 2)
+    _wait_shared_cfg_servers(topology_m2.ms["supplier2"], 2)
 
     # now force the method/transport on the servers entry
-    _shared_cfg_server_update(topology_m2.ms["master1"])
-    _shared_cfg_server_update(topology_m2.ms["master2"])
+    _shared_cfg_server_update(topology_m2.ms["supplier1"])
+    _shared_cfg_server_update(topology_m2.ms["supplier2"])
 
     log.info('\n======================== BEFORE RESTART ============================\n')
-    ent = topology_m2.ms["master1"].getEntry(SHARE_CFG_BASE, ldap.SCOPE_ONELEVEL,
-                                             "(dnaPortNum=%d)" % topology_m2.ms["master1"].port)
+    ent = topology_m2.ms["supplier1"].getEntry(SHARE_CFG_BASE, ldap.SCOPE_ONELEVEL,
+                                             "(dnaPortNum=%d)" % topology_m2.ms["supplier1"].port)
     log.info('\n======================== BEFORE RESTART ============================\n')
     assert (ent.hasAttr(BINDMETHOD_ATTR) and ent.getValue(BINDMETHOD_ATTR) == BINDMETHOD_VALUE)
     assert (ent.hasAttr(PROTOCOLE_ATTR) and ent.getValue(PROTOCOLE_ATTR) == PROTOCOLE_VALUE)
 
-    ent = topology_m2.ms["master2"].getEntry(SHARE_CFG_BASE, ldap.SCOPE_ONELEVEL,
-                                             "(dnaPortNum=%d)" % topology_m2.ms["master2"].port)
+    ent = topology_m2.ms["supplier2"].getEntry(SHARE_CFG_BASE, ldap.SCOPE_ONELEVEL,
+                                             "(dnaPortNum=%d)" % topology_m2.ms["supplier2"].port)
     log.info('\n======================== BEFORE RESTART ============================\n')
     assert (ent.hasAttr(BINDMETHOD_ATTR) and ent.getValue(BINDMETHOD_ATTR) == BINDMETHOD_VALUE)
     assert (ent.hasAttr(PROTOCOLE_ATTR) and ent.getValue(PROTOCOLE_ATTR) == PROTOCOLE_VALUE)
-    topology_m2.ms["master1"].restart(10)
-    topology_m2.ms["master2"].restart(10)
+    topology_m2.ms["supplier1"].restart(10)
+    topology_m2.ms["supplier2"].restart(10)
 
     # to allow DNA plugin to recreate the local host entry
     time.sleep(40)
 
     log.info('\n=================== AFTER RESTART =================================\n')
-    ent = topology_m2.ms["master1"].getEntry(SHARE_CFG_BASE, ldap.SCOPE_ONELEVEL,
-                                             "(dnaPortNum=%d)" % topology_m2.ms["master1"].port)
+    ent = topology_m2.ms["supplier1"].getEntry(SHARE_CFG_BASE, ldap.SCOPE_ONELEVEL,
+                                             "(dnaPortNum=%d)" % topology_m2.ms["supplier1"].port)
     log.info('\n=================== AFTER RESTART =================================\n')
     assert (ent.hasAttr(BINDMETHOD_ATTR) and ent.getValue(BINDMETHOD_ATTR) == BINDMETHOD_VALUE)
     assert (ent.hasAttr(PROTOCOLE_ATTR) and ent.getValue(PROTOCOLE_ATTR) == PROTOCOLE_VALUE)
 
-    ent = topology_m2.ms["master2"].getEntry(SHARE_CFG_BASE, ldap.SCOPE_ONELEVEL,
-                                             "(dnaPortNum=%d)" % topology_m2.ms["master2"].port)
+    ent = topology_m2.ms["supplier2"].getEntry(SHARE_CFG_BASE, ldap.SCOPE_ONELEVEL,
+                                             "(dnaPortNum=%d)" % topology_m2.ms["supplier2"].port)
     log.info('\n=================== AFTER RESTART =================================\n')
     assert (ent.hasAttr(BINDMETHOD_ATTR) and ent.getValue(BINDMETHOD_ATTR) == BINDMETHOD_VALUE)
     assert (ent.hasAttr(PROTOCOLE_ATTR) and ent.getValue(PROTOCOLE_ATTR) == PROTOCOLE_VALUE)
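
The BEFORE/AFTER RESTART blocks above repeat one lookup-and-assert per supplier. As a sketch, assuming ldap, SHARE_CFG_BASE and the attribute constants defined earlier in the test (helper name hypothetical):

    def check_shared_cfg_entry(server):
        # The DNA shared-config entry for this server must keep its
        # bind method and connection protocol across restarts.
        ent = server.getEntry(SHARE_CFG_BASE, ldap.SCOPE_ONELEVEL,
                              "(dnaPortNum=%d)" % server.port)
        assert ent.hasAttr(BINDMETHOD_ATTR)
        assert ent.getValue(BINDMETHOD_ATTR) == BINDMETHOD_VALUE
        assert ent.hasAttr(PROTOCOLE_ATTR)
        assert ent.getValue(PROTOCOLE_ATTR) == PROTOCOLE_VALUE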

+ 2 - 2
dirsrvtests/tests/tickets/ticket48759_test.py

@@ -14,7 +14,7 @@ from lib389.utils import *
 from lib389.topologies import topology_st
 from lib389.replica import ReplicationManager,Replicas
 
-from lib389._constants import (PLUGIN_MEMBER_OF, DEFAULT_SUFFIX, ReplicaRole, REPLICAID_MASTER_1,
+from lib389._constants import (PLUGIN_MEMBER_OF, DEFAULT_SUFFIX, ReplicaRole, REPLICAID_SUPPLIER_1,
                                PLUGIN_RETRO_CHANGELOG, REPLICA_PRECISE_PURGING, REPLICA_PURGE_DELAY,
                                REPLICA_PURGE_INTERVAL)
 
@@ -110,7 +110,7 @@ def test_ticket48759(topology_st):
     #
     log.info('Setting up replication...')
     repl = ReplicationManager(DEFAULT_SUFFIX)
-    repl.create_first_master(topology_st.standalone)
+    repl.create_first_supplier(topology_st.standalone)
     #
     # enable dynamic plugins, memberof and retro cl plugin
     #

+ 31 - 31
dirsrvtests/tests/tickets/ticket48784_test.py

@@ -48,22 +48,22 @@ def add_entry(server, name, rdntmpl, start, num):
 
 def config_tls_agreements(topology_m2):
     log.info("######################### Configure SSL/TLS agreements ######################")
-    log.info("######################## master1 <-- startTLS -> master2 #####################")
+    log.info("######################## supplier1 <-- startTLS -> supplier2 #####################")
 
-    log.info("##### Update the agreement of master1")
-    m1 = topology_m2.ms["master1"]
+    log.info("##### Update the agreement of supplier1")
+    m1 = topology_m2.ms["supplier1"]
     m1_m2_agmt = m1.agreement.list(suffix=DEFAULT_SUFFIX)[0].dn
-    topology_m2.ms["master1"].modify_s(m1_m2_agmt, [(ldap.MOD_REPLACE, 'nsDS5ReplicaTransportInfo', b'TLS')])
+    topology_m2.ms["supplier1"].modify_s(m1_m2_agmt, [(ldap.MOD_REPLACE, 'nsDS5ReplicaTransportInfo', b'TLS')])
 
-    log.info("##### Update the agreement of master2")
-    m2 = topology_m2.ms["master2"]
+    log.info("##### Update the agreement of supplier2")
+    m2 = topology_m2.ms["supplier2"]
     m2_m1_agmt = m2.agreement.list(suffix=DEFAULT_SUFFIX)[0].dn
-    topology_m2.ms["master2"].modify_s(m2_m1_agmt, [(ldap.MOD_REPLACE, 'nsDS5ReplicaTransportInfo', b'TLS')])
+    topology_m2.ms["supplier2"].modify_s(m2_m1_agmt, [(ldap.MOD_REPLACE, 'nsDS5ReplicaTransportInfo', b'TLS')])
 
     time.sleep(1)
 
-    topology_m2.ms["master1"].restart(10)
-    topology_m2.ms["master2"].restart(10)
+    topology_m2.ms["supplier1"].restart(10)
+    topology_m2.ms["supplier2"].restart(10)
 
     log.info("\n######################### Configure SSL/TLS agreements Done ######################\n")
 
@@ -81,10 +81,10 @@ def set_ssl_Version(server, name, version):
 def test_ticket48784(topology_m2):
     """
     Set up 2way MMR:
-        master_1 <----- startTLS -----> master_2
+        supplier_1 <----- startTLS -----> supplier_2
 
     Make sure the replication is working.
-    Then, stop the servers and set only TLS1.0 on master_1 while TLS1.2 on master_2
+    Then, stop the servers and set only TLS1.0 on supplier_1 while TLS1.2 on supplier_2
     Replication is supposed to fail.
     """
     log.info("Ticket 48784 - Allow usage of OpenLDAP libraries that don't use NSS for crypto")
@@ -94,40 +94,40 @@ def test_ticket48784(topology_m2):
 
     config_tls_agreements(topology_m2)
 
-    add_entry(topology_m2.ms["master1"], 'master1', 'uid=m1user', 0, 5)
-    add_entry(topology_m2.ms["master2"], 'master2', 'uid=m2user', 0, 5)
+    add_entry(topology_m2.ms["supplier1"], 'supplier1', 'uid=m1user', 0, 5)
+    add_entry(topology_m2.ms["supplier2"], 'supplier2', 'uid=m2user', 0, 5)
 
     time.sleep(10)
 
-    log.info('##### Searching for entries on master1...')
-    entries = topology_m2.ms["master1"].search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, '(uid=*)')
+    log.info('##### Searching for entries on supplier1...')
+    entries = topology_m2.ms["supplier1"].search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, '(uid=*)')
     assert 10 == len(entries)
 
-    log.info('##### Searching for entries on master2...')
-    entries = topology_m2.ms["master2"].search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, '(uid=*)')
+    log.info('##### Searching for entries on supplier2...')
+    entries = topology_m2.ms["supplier2"].search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, '(uid=*)')
     assert 10 == len(entries)
 
     log.info("##### openldap client just accepts sslVersionMin not Max.")
-    set_ssl_Version(topology_m2.ms["master1"], 'master1', 'TLS1.0')
-    set_ssl_Version(topology_m2.ms["master2"], 'master2', 'TLS1.2')
+    set_ssl_Version(topology_m2.ms["supplier1"], 'supplier1', 'TLS1.0')
+    set_ssl_Version(topology_m2.ms["supplier2"], 'supplier2', 'TLS1.2')
 
-    log.info("##### restart master[12]")
-    topology_m2.ms["master1"].restart(timeout=10)
-    topology_m2.ms["master2"].restart(timeout=10)
+    log.info("##### restart supplier[12]")
+    topology_m2.ms["supplier1"].restart(timeout=10)
+    topology_m2.ms["supplier2"].restart(timeout=10)
 
-    log.info("##### replication from master_1 to master_2 should be ok.")
-    add_entry(topology_m2.ms["master1"], 'master1', 'uid=m1user', 10, 1)
-    log.info("##### replication from master_2 to master_1 should fail.")
-    add_entry(topology_m2.ms["master2"], 'master2', 'uid=m2user', 10, 1)
+    log.info("##### replication from supplier_1 to supplier_2 should be ok.")
+    add_entry(topology_m2.ms["supplier1"], 'supplier1', 'uid=m1user', 10, 1)
+    log.info("##### replication from supplier_2 to supplier_1 should fail.")
+    add_entry(topology_m2.ms["supplier2"], 'supplier2', 'uid=m2user', 10, 1)
 
     time.sleep(10)
 
-    log.info('##### Searching for entries on master1...')
-    entries = topology_m2.ms["master1"].search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, '(uid=*)')
-    assert 11 == len(entries)  # This is supposed to be "1" less than master 2's entry count
+    log.info('##### Searching for entries on supplier1...')
+    entries = topology_m2.ms["supplier1"].search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, '(uid=*)')
+    assert 11 == len(entries)  # This is supposed to be "1" less than supplier 2's entry count
 
-    log.info('##### Searching for entries on master2...')
-    entries = topology_m2.ms["master2"].search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, '(uid=*)')
+    log.info('##### Searching for entries on supplier2...')
+    entries = topology_m2.ms["supplier2"].search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, '(uid=*)')
     assert 12 == len(entries)
 
     log.info("Ticket 48784 - PASSED")

+ 9 - 9
dirsrvtests/tests/tickets/ticket48799_test.py

@@ -49,7 +49,7 @@ def _modify_user(server):
 def test_ticket48799(topology_m1c1):
     """Write your replication testcase here.
 
-    To access each DirSrv instance use:  topology_m1c1.ms["master1"], topology_m1c1.ms["master1"]2,
+    To access each DirSrv instance use:  topology_m1c1.ms["supplier1"],
         ..., topology_m1c1.hub1, ..., topology_m1c1.cs["consumer1"],...
 
     Also, if you need any testcase initialization,
@@ -57,25 +57,25 @@ def test_ticket48799(topology_m1c1):
     """
 
     # Add the new schema element.
-    _add_custom_schema(topology_m1c1.ms["master1"])
+    _add_custom_schema(topology_m1c1.ms["supplier1"])
     _add_custom_schema(topology_m1c1.cs["consumer1"])
 
-    # Add a new user on the master.
-    _create_user(topology_m1c1.ms["master1"])
-    # Modify the user on the master.
-    _modify_user(topology_m1c1.ms["master1"])
+    # Add a new user on the supplier.
+    _create_user(topology_m1c1.ms["supplier1"])
+    # Modify the user on the supplier.
+    _modify_user(topology_m1c1.ms["supplier1"])
 
     # We need to wait for replication here.
     time.sleep(15)
 
-    # Now compare the master vs consumer, and see if the objectClass was dropped.
+    # Now compare the supplier vs consumer, and see if the objectClass was dropped.
 
-    master_entry = topology_m1c1.ms["master1"].search_s("uid=testuser,ou=People,%s" % DEFAULT_SUFFIX, ldap.SCOPE_BASE,
+    supplier_entry = topology_m1c1.ms["supplier1"].search_s("uid=testuser,ou=People,%s" % DEFAULT_SUFFIX, ldap.SCOPE_BASE,
                                                         '(objectclass=*)', ['objectClass'])
     consumer_entry = topology_m1c1.cs["consumer1"].search_s("uid=testuser,ou=People,%s" % DEFAULT_SUFFIX,
                                                             ldap.SCOPE_BASE, '(objectclass=*)', ['objectClass'])
 
-    assert (master_entry == consumer_entry)
+    assert (supplier_entry == consumer_entry)
 
     log.info('Test complete')
 

+ 10 - 10
dirsrvtests/tests/tickets/ticket48916_test.py

@@ -37,7 +37,7 @@ def test_ticket48916(topology_m2):
 
     This is an issue with ID exhaustion in DNA causing a crash.
 
-    To access each DirSrv instance use:  topology_m2.ms["master1"], topology_m2.ms["master2"],
+    To access each DirSrv instance use:  topology_m2.ms["supplier1"], topology_m2.ms["supplier2"],
         ..., topology_m2.hub1, ..., topology_m2.consumer1,...
 
 
@@ -49,13 +49,13 @@ def test_ticket48916(topology_m2):
 
     # Enable the plugin on both servers
 
-    dna_m1 = topology_m2.ms["master1"].plugins.get('Distributed Numeric Assignment Plugin')
-    dna_m2 = topology_m2.ms["master2"].plugins.get('Distributed Numeric Assignment Plugin')
+    dna_m1 = topology_m2.ms["supplier1"].plugins.get('Distributed Numeric Assignment Plugin')
+    dna_m2 = topology_m2.ms["supplier2"].plugins.get('Distributed Numeric Assignment Plugin')
 
     # Configure it
     # Create the container for the ranges to go into.
 
-    topology_m2.ms["master1"].add_s(Entry(
+    topology_m2.ms["supplier1"].add_s(Entry(
         ('ou=Ranges,%s' % DEFAULT_SUFFIX, {
             'objectClass': 'top organizationalUnit'.split(' '),
             'ou': 'Ranges',
@@ -69,7 +69,7 @@ def test_ticket48916(topology_m2):
 
     config_dn = dna_m1.dn
 
-    topology_m2.ms["master1"].add_s(Entry(
+    topology_m2.ms["supplier1"].add_s(Entry(
         ('cn=uids,%s' % config_dn, {
             'objectClass': 'top dnaPluginConfig'.split(' '),
             'cn': 'uids',
@@ -88,7 +88,7 @@ def test_ticket48916(topology_m2):
         })
     ))
 
-    topology_m2.ms["master2"].add_s(Entry(
+    topology_m2.ms["supplier2"].add_s(Entry(
         ('cn=uids,%s' % config_dn, {
             'objectClass': 'top dnaPluginConfig'.split(' '),
             'cn': 'uids',
@@ -111,8 +111,8 @@ def test_ticket48916(topology_m2):
     dna_m2.enable()
 
     # Restart the instances
-    topology_m2.ms["master1"].restart(60)
-    topology_m2.ms["master2"].restart(60)
+    topology_m2.ms["supplier1"].restart(60)
+    topology_m2.ms["supplier2"].restart(60)
 
     # Wait for a replication .....
     time.sleep(40)
@@ -120,10 +120,10 @@ def test_ticket48916(topology_m2):
     # Allocate the 10 members to exhaust
 
     for i in range(1, 11):
-        _create_user(topology_m2.ms["master2"], i)
+        _create_user(topology_m2.ms["supplier2"], i)
 
     # Allocate the 11th
-    _create_user(topology_m2.ms["master2"], 11)
+    _create_user(topology_m2.ms["supplier2"], 11)
 
     log.info('Test PASSED')
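
The last hunk exhausts a ten-slot DNA range and then allocates an eleventh value to hit the crash path. A sketch of that sequence, reusing the test's own _create_user() (wrapper name hypothetical):

    def exhaust_dna_range(server, size=10):
        # Use up every slot in the configured range, then request one
        # more allocation to exercise the exhaustion path.
        for i in range(1, size + 1):
            _create_user(server, i)
        _create_user(server, size + 1)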
 

+ 39 - 39
dirsrvtests/tests/tickets/ticket48944_test.py

@@ -29,9 +29,9 @@ USER_PW = 'Secret123'
 
 
 def _last_login_time(topo, userdn, inst_name, last_login):
-    """Find lastLoginTime attribute value for a given master/consumer"""
+    """Find lastLoginTime attribute value for a given supplier/consumer"""
 
-    if 'master' in inst_name:
+    if 'supplier' in inst_name:
         if (last_login == 'bind_n_check'):
             topo.ms[inst_name].simple_bind_s(userdn, USER_PW)
         topo.ms[inst_name].simple_bind_s(DN_DM, PASSWORD)
@@ -50,7 +50,7 @@ def _enable_plugin(topo, inst_name):
     """Enable account policy plugin and configure required attributes"""
 
     log.info('Enable account policy plugin and configure required attributes')
-    if 'master' in inst_name:
+    if 'supplier' in inst_name:
         log.info('Configure Account policy plugin on {}'.format(inst_name))
         topo.ms[inst_name].simple_bind_s(DN_DM, PASSWORD)
         try:
@@ -87,20 +87,20 @@ def test_ticket48944(topo):
 
     :id: 833be131-f3bf-493e-97c6-3121438a07b1
     :feature: Account Policy Plugin
-    :setup: Two master and two consumer setup
+    :setup: Two supplier and two consumer setup
     :steps: 1. Configure Account policy plugin with alwaysrecordlogin set to yes
-            2. Check if entries are synced across masters and consumers
-            3. Stop all masters and consumers
-            4. Start master1 and bind as user1 to create lastLoginTime attribute
-            5. Start master2 and wait for the sync of lastLoginTime attribute
-            6. Stop master1 and bind as user1 from master2
-            7. Check if lastLoginTime attribute is updated and greater than master1
-            8. Stop master2, start consumer1, consumer2 and then master2
+            2. Check if entries are synced across suppliers and consumers
+            3. Stop all suppliers and consumers
+            4. Start supplier1 and bind as user1 to create lastLoginTime attribute
+            5. Start supplier2 and wait for the sync of lastLoginTime attribute
+            6. Stop supplier1 and bind as user1 from supplier2
+            7. Check if lastLoginTime attribute is updated and greater than supplier1
+            8. Stop supplier2, start consumer1, consumer2 and then supplier2
             9. Check if lastLoginTime attribute is updated on both consumers
             10. Bind as user1 to both consumers and check the value is updated
             11. Check if lastLoginTime attribute is not updated from consumers
-            12. Start master1 and make sure the lastLoginTime attribute is not updated on consumers
-            13. Bind as user1 from master1 and check if all masters and consumers have the same value
+            12. Start supplier1 and make sure the lastLoginTime attribute is not updated on consumers
+            13. Bind as user1 from supplier1 and check if all suppliers and consumers have the same value
             14. Check error logs of consumers for "deletedattribute;deleted" message
     :expectedresults: No accumulation of replica invalid state info on consumers
     """
@@ -108,7 +108,7 @@ def test_ticket48944(topo):
     log.info("Ticket 48944 - On a read only replica invalid state info can accumulate")
     user_name = 'newbzusr'
     tuserdn = 'uid={}1,ou=people,{}'.format(user_name, SUFFIX)
-    inst_list = ['master1', 'master2', 'consumer1', 'consumer2']
+    inst_list = ['supplier1', 'supplier2', 'consumer1', 'consumer2']
     for inst_name in inst_list:
         _enable_plugin(topo, inst_name)
 
@@ -118,7 +118,7 @@ def test_ticket48944(topo):
     for nos in range(10):
         userdn = 'uid={}{},ou=people,{}'.format(user_name, nos, SUFFIX)
         try:
-            topo.ms['master1'].add_s(Entry((userdn, {
+            topo.ms['supplier1'].add_s(Entry((userdn, {
                 'objectclass': 'top person'.split(),
                 'objectclass': 'inetorgperson',
                 'cn': user_name,
@@ -129,10 +129,10 @@ def test_ticket48944(topo):
             log.error('Failed to add {} user: error {}'.format(userdn, e.message['desc']))
             raise e
 
-    log.info('Checking if entries are synced across masters and consumers')
-    entries_m1 = topo.ms['master1'].search_s(SUFFIX, ldap.SCOPE_SUBTREE, 'uid={}*'.format(user_name), ['uid=*'])
+    log.info('Checking if entries are synced across suppliers and consumers')
+    entries_m1 = topo.ms['supplier1'].search_s(SUFFIX, ldap.SCOPE_SUBTREE, 'uid={}*'.format(user_name), ['uid=*'])
     exp_entries = str(entries_m1).count('dn: uid={}*'.format(user_name))
-    entries_m2 = topo.ms['master2'].search_s(SUFFIX, ldap.SCOPE_SUBTREE, 'uid={}*'.format(user_name), ['uid=*'])
+    entries_m2 = topo.ms['supplier2'].search_s(SUFFIX, ldap.SCOPE_SUBTREE, 'uid={}*'.format(user_name), ['uid=*'])
     act_entries = str(entries_m2).count('dn: uid={}*'.format(user_name))
     assert act_entries == exp_entries
     inst_list = ['consumer1', 'consumer2']
@@ -141,37 +141,37 @@ def test_ticket48944(topo):
         act_entries = str(entries_other).count('dn: uid={}*'.format(user_name))
         assert act_entries == exp_entries
 
-    topo.ms['master2'].stop(timeout=10)
-    topo.ms['master1'].stop(timeout=10)
+    topo.ms['supplier2'].stop(timeout=10)
+    topo.ms['supplier1'].stop(timeout=10)
     topo.cs['consumer1'].stop(timeout=10)
     topo.cs['consumer2'].stop(timeout=10)
 
-    topo.ms['master1'].start(timeout=10)
-    lastLogin_m1_1 = _last_login_time(topo, tuserdn, 'master1', 'bind_n_check')
+    topo.ms['supplier1'].start(timeout=10)
+    lastLogin_m1_1 = _last_login_time(topo, tuserdn, 'supplier1', 'bind_n_check')
 
-    log.info('Start master2 to sync lastLoginTime attribute from master1')
-    topo.ms['master2'].start(timeout=10)
+    log.info('Start supplier2 to sync lastLoginTime attribute from supplier1')
+    topo.ms['supplier2'].start(timeout=10)
     time.sleep(5)
-    log.info('Stop master1')
-    topo.ms['master1'].stop(timeout=10)
-    log.info('Bind as user1 to master2 and check if lastLoginTime attribute is greater than master1')
-    lastLogin_m2_1 = _last_login_time(topo, tuserdn, 'master2', 'bind_n_check')
+    log.info('Stop supplier1')
+    topo.ms['supplier1'].stop(timeout=10)
+    log.info('Bind as user1 to supplier2 and check if lastLoginTime attribute is greater than supplier1')
+    lastLogin_m2_1 = _last_login_time(topo, tuserdn, 'supplier2', 'bind_n_check')
     assert lastLogin_m2_1 > lastLogin_m1_1
 
-    log.info('Start all servers except master1')
-    topo.ms['master2'].stop(timeout=10)
+    log.info('Start all servers except supplier1')
+    topo.ms['supplier2'].stop(timeout=10)
     topo.cs['consumer1'].start(timeout=10)
     topo.cs['consumer2'].start(timeout=10)
-    topo.ms['master2'].start(timeout=10)
+    topo.ms['supplier2'].start(timeout=10)
     time.sleep(10)
-    log.info('Check if consumers are updated with lastLoginTime attribute value from master2')
+    log.info('Check if consumers are updated with lastLoginTime attribute value from supplier2')
     lastLogin_c1_1 = _last_login_time(topo, tuserdn, 'consumer1', 'check')
     assert lastLogin_c1_1 == lastLogin_m2_1
 
     lastLogin_c2_1 = _last_login_time(topo, tuserdn, 'consumer2', 'check')
     assert lastLogin_c2_1 == lastLogin_m2_1
 
-    log.info('Check if lastLoginTime update in consumers not synced to master2')
+    log.info('Check if lastLoginTime update in consumers not synced to supplier2')
     lastLogin_c1_2 = _last_login_time(topo, tuserdn, 'consumer1', 'bind_n_check')
     assert lastLogin_c1_2 > lastLogin_m2_1
 
@@ -179,11 +179,11 @@ def test_ticket48944(topo):
     assert lastLogin_c2_2 > lastLogin_m2_1
 
     time.sleep(10)  # Allow replication to kick in
-    lastLogin_m2_2 = _last_login_time(topo, tuserdn, 'master2', 'check')
+    lastLogin_m2_2 = _last_login_time(topo, tuserdn, 'supplier2', 'check')
     assert lastLogin_m2_2 == lastLogin_m2_1
 
-    log.info('Start master1 and check if its updating its older lastLoginTime attribute to consumers')
-    topo.ms['master1'].start(timeout=10)
+    log.info('Start supplier1 and check if it is updating its older lastLoginTime attribute to consumers')
+    topo.ms['supplier1'].start(timeout=10)
     time.sleep(10)
     lastLogin_c1_3 = _last_login_time(topo, tuserdn, 'consumer1', 'check')
     assert lastLogin_c1_3 == lastLogin_c1_2
@@ -191,10 +191,10 @@ def test_ticket48944(topo):
     lastLogin_c2_3 = _last_login_time(topo, tuserdn, 'consumer2', 'check')
     assert lastLogin_c2_3 == lastLogin_c2_2
 
-    log.info('Check if lastLoginTime update from master2 is synced to all masters and consumers')
-    lastLogin_m2_3 = _last_login_time(topo, tuserdn, 'master2', 'bind_n_check')
+    log.info('Check if lastLoginTime update from supplier2 is synced to all suppliers and consumers')
+    lastLogin_m2_3 = _last_login_time(topo, tuserdn, 'supplier2', 'bind_n_check')
     time.sleep(10)  # Allow replication to kick in
-    lastLogin_m1_2 = _last_login_time(topo, tuserdn, 'master1', 'check')
+    lastLogin_m1_2 = _last_login_time(topo, tuserdn, 'supplier1', 'check')
     lastLogin_c1_4 = _last_login_time(topo, tuserdn, 'consumer1', 'check')
     lastLogin_c2_4 = _last_login_time(topo, tuserdn, 'consumer2', 'check')
     assert lastLogin_m2_3 == lastLogin_m1_2 == lastLogin_c2_4 == lastLogin_c1_4
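
_last_login_time() above dispatches on the instance name: suppliers live under topo.ms and consumers under topo.cs after the rename. A sketch of that lookup on its own (helper name hypothetical):

    def get_instance(topo, inst_name):
        # 'supplier1'/'supplier2' resolve via topo.ms, the consumers
        # via topo.cs, matching the name checks in _last_login_time().
        return topo.ms[inst_name] if 'supplier' in inst_name else topo.cs[inst_name]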

+ 4 - 4
dirsrvtests/tests/tickets/ticket49008_test.py

@@ -18,9 +18,9 @@ log = logging.getLogger(__name__)
 
 
 def test_ticket49008(T):
-    A = T.ms['master1']
-    B = T.ms['master2']
-    C = T.ms['master3']
+    A = T.ms['supplier1']
+    B = T.ms['supplier2']
+    C = T.ms['supplier3']
 
     A.enableReplLogging()
     B.enableReplLogging()
@@ -35,7 +35,7 @@ def test_ticket49008(T):
     A.agreement.pause(AtoC)
     C.agreement.pause(CtoA)
 
-    # Enable memberOf on Master B
+    # Enable memberOf on Supplier B
     B.plugins.enable(name=PLUGIN_MEMBER_OF)
 
     # Set the auto OC to an objectclass that does NOT allow memberOf

+ 4 - 4
dirsrvtests/tests/tickets/ticket49020_test.py

@@ -24,9 +24,9 @@ log = logging.getLogger(__name__)
 
 
 def test_ticket49020(T):
-    A = T.ms['master1']
-    B = T.ms['master2']
-    C = T.ms['master3']
+    A = T.ms['supplier1']
+    B = T.ms['supplier2']
+    C = T.ms['supplier3']
 
     A.enableReplLogging()
     B.enableReplLogging()
@@ -44,7 +44,7 @@ def test_ticket49020(T):
     A.add_s(Entry((dn, {'objectclass': "top person".split(),
                         'sn': name,'cn': name})))
 
-    A.agreement.init(DEFAULT_SUFFIX, socket.gethostname(), PORT_MASTER_3)
+    A.agreement.init(DEFAULT_SUFFIX, socket.gethostname(), PORT_SUPPLIER_3)
     time.sleep(5)
     for i in range(1,11):
         name = "userY{}".format(i)

+ 26 - 26
dirsrvtests/tests/tickets/ticket49073_test.py

@@ -3,8 +3,8 @@ from lib389.tasks import *
 from lib389.utils import *
 from lib389.topologies import topology_m2
 
-from lib389._constants import (PLUGIN_MEMBER_OF, DEFAULT_SUFFIX, SUFFIX, HOST_MASTER_2,
-                              PORT_MASTER_2)
+from lib389._constants import (PLUGIN_MEMBER_OF, DEFAULT_SUFFIX, SUFFIX, HOST_SUPPLIER_2,
+                              PORT_SUPPLIER_2)
 
 # Skip on older versions
 pytestmark = [pytest.mark.tier2,
@@ -23,7 +23,7 @@ log = logging.getLogger(__name__)
 def _add_group_with_members(topology_m2):
     # Create group
     try:
-        topology_m2.ms["master1"].add_s(Entry((GROUP_DN,
+        topology_m2.ms["supplier1"].add_s(Entry((GROUP_DN,
                                       {'objectclass': 'top groupofnames'.split(),
                                        'cn': 'group'})))
     except ldap.LDAPError as e:
@@ -35,7 +35,7 @@ def _add_group_with_members(topology_m2):
     for idx in range(1, 5):
         try:
             MEMBER_VAL = ("uid=member%d,%s" % (idx, DEFAULT_SUFFIX))
-            topology_m2.ms["master1"].modify_s(GROUP_DN,
+            topology_m2.ms["supplier1"].modify_s(GROUP_DN,
                                       [(ldap.MOD_ADD,
                                         'member',
                                         MEMBER_VAL)])
@@ -45,12 +45,12 @@ def _add_group_with_members(topology_m2):
             assert False
 
 
-def _check_memberof(master, presence_flag):
+def _check_memberof(supplier, presence_flag):
     # Check that members have memberof attribute on M1
     for idx in range(1, 5):
         try:
             USER_DN = ("uid=member%d,%s" % (idx, DEFAULT_SUFFIX))
-            ent = master.getEntry(USER_DN, ldap.SCOPE_BASE, "(objectclass=*)")
+            ent = supplier.getEntry(USER_DN, ldap.SCOPE_BASE, "(objectclass=*)")
             if presence_flag:
                 assert ent.hasAttr('memberof') and ent.getValue('memberof') == GROUP_DN
             else:
@@ -60,12 +60,12 @@ def _check_memberof(master, presence_flag):
             assert False
 
 
-def _check_entry_exist(master, dn):
+def _check_entry_exist(supplier, dn):
     attempt = 0
     while attempt <= 10:
         try:
             dn
-            ent = master.getEntry(dn, ldap.SCOPE_BASE, "(objectclass=*)")
+            ent = supplier.getEntry(dn, ldap.SCOPE_BASE, "(objectclass=*)")
             break
         except ldap.NO_SUCH_OBJECT:
             attempt = attempt + 1
@@ -79,29 +79,29 @@ def _check_entry_exist(master, dn):
 def test_ticket49073(topology_m2):
     """Write your replication test here.
 
-    To access each DirSrv instance use:  topology_m2.ms["master1"], topology_m2.ms["master2"],
+    To access each DirSrv instance use:  topology_m2.ms["supplier1"], topology_m2.ms["supplier2"],
         ..., topology_m2.hub1, ..., topology_m2.consumer1,...
 
     Also, if you need any testcase initialization,
     please, write additional fixture for that(include finalizer).
     """
-    topology_m2.ms["master1"].plugins.enable(name=PLUGIN_MEMBER_OF)
-    topology_m2.ms["master1"].restart(timeout=10)
-    topology_m2.ms["master2"].plugins.enable(name=PLUGIN_MEMBER_OF)
-    topology_m2.ms["master2"].restart(timeout=10)
+    topology_m2.ms["supplier1"].plugins.enable(name=PLUGIN_MEMBER_OF)
+    topology_m2.ms["supplier1"].restart(timeout=10)
+    topology_m2.ms["supplier2"].plugins.enable(name=PLUGIN_MEMBER_OF)
+    topology_m2.ms["supplier2"].restart(timeout=10)
 
     # Configure fractional to prevent total init to send memberof
-    ents = topology_m2.ms["master1"].agreement.list(suffix=SUFFIX)
+    ents = topology_m2.ms["supplier1"].agreement.list(suffix=SUFFIX)
     assert len(ents) == 1
     log.info('update %s to add nsDS5ReplicatedAttributeListTotal' % ents[0].dn)
-    topology_m2.ms["master1"].modify_s(ents[0].dn,
+    topology_m2.ms["supplier1"].modify_s(ents[0].dn,
                               [(ldap.MOD_REPLACE,
                                 'nsDS5ReplicatedAttributeListTotal',
                                 '(objectclass=*) $ EXCLUDE '),
                                (ldap.MOD_REPLACE,
                                 'nsDS5ReplicatedAttributeList',
                                 '(objectclass=*) $ EXCLUDE memberOf')])
-    topology_m2.ms["master1"].restart(timeout=10)
+    topology_m2.ms["supplier1"].restart(timeout=10)
 
     #
     #  create some users and a group
@@ -110,33 +110,33 @@ def test_ticket49073(topology_m2):
     for idx in range(1, 5):
         try:
             USER_DN = ("uid=member%d,%s" % (idx, DEFAULT_SUFFIX))
-            topology_m2.ms["master1"].add_s(Entry((USER_DN,
+            topology_m2.ms["supplier1"].add_s(Entry((USER_DN,
                                           {'objectclass': 'top extensibleObject'.split(),
                                            'uid': 'member%d' % (idx)})))
         except ldap.LDAPError as e:
             log.fatal('Failed to add user (%s): error %s' % (USER_DN, e.message['desc']))
             assert False
 
-    _check_entry_exist(topology_m2.ms["master2"], "uid=member4,%s" % (DEFAULT_SUFFIX))
+    _check_entry_exist(topology_m2.ms["supplier2"], "uid=member4,%s" % (DEFAULT_SUFFIX))
     _add_group_with_members(topology_m2)
-    _check_entry_exist(topology_m2.ms["master2"], GROUP_DN)
+    _check_entry_exist(topology_m2.ms["supplier2"], GROUP_DN)
 
     # Check that for regular update memberof was on both side (because plugin is enabled both)
     time.sleep(5)
-    _check_memberof(topology_m2.ms["master1"], True)
-    _check_memberof(topology_m2.ms["master2"], True)
+    _check_memberof(topology_m2.ms["supplier1"], True)
+    _check_memberof(topology_m2.ms["supplier2"], True)
 
     # reinit with fractional definition
-    ents = topology_m2.ms["master1"].agreement.list(suffix=SUFFIX)
+    ents = topology_m2.ms["supplier1"].agreement.list(suffix=SUFFIX)
     assert len(ents) == 1
-    topology_m2.ms["master1"].agreement.init(SUFFIX, HOST_MASTER_2, PORT_MASTER_2)
-    topology_m2.ms["master1"].waitForReplInit(ents[0].dn)
+    topology_m2.ms["supplier1"].agreement.init(SUFFIX, HOST_SUPPLIER_2, PORT_SUPPLIER_2)
+    topology_m2.ms["supplier1"].waitForReplInit(ents[0].dn)
 
     # Check that for total update memberof was on both sides
     # because memberof is NOT excluded from total init
     time.sleep(5)
-    _check_memberof(topology_m2.ms["master1"], True)
-    _check_memberof(topology_m2.ms["master2"], True)
+    _check_memberof(topology_m2.ms["supplier1"], True)
+    _check_memberof(topology_m2.ms["supplier2"], True)
 
     if DEBUGGING:
         # Add debugging steps(if any)...

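The ticket49073 hunks drive fractional replication through two attributes on the agreement entry: nsDS5ReplicatedAttributeList filters incremental updates, while nsDS5ReplicatedAttributeListTotal filters total init. A minimal sketch of the same modify with python-ldap, mirroring the test (the example suffix and the bytes values are assumptions; the test itself passes py2-style str):

    import ldap
    # supplier1: a connected lib389 DirSrv instance, as in topology_m2
    ents = supplier1.agreement.list(suffix="dc=example,dc=com")  # assumed suffix
    supplier1.modify_s(ents[0].dn, [
        # exclude memberOf from incremental replication...
        (ldap.MOD_REPLACE, 'nsDS5ReplicatedAttributeList',
         b'(objectclass=*) $ EXCLUDE memberOf'),
        # ...but replicate everything on total init
        (ldap.MOD_REPLACE, 'nsDS5ReplicatedAttributeListTotal',
         b'(objectclass=*) $ EXCLUDE '),
    ])
    supplier1.restart(timeout=10)
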
+ 13 - 13
dirsrvtests/tests/tickets/ticket49121_test.py

@@ -39,7 +39,7 @@ def test_ticket49121(topology_m2):
     shorter than the real size and caused the Invalid write / server crash.
     """
 
-    utf8file = os.path.join(topology_m2.ms["master1"].getDir(__file__, DATA_DIR), "ticket49121/utf8str.txt")
+    utf8file = os.path.join(topology_m2.ms["supplier1"].getDir(__file__, DATA_DIR), "ticket49121/utf8str.txt")
     utf8obj = codecs.open(utf8file, 'r', 'utf-8')
     utf8strorig = utf8obj.readline()
     utf8str = ensure_bytes(utf8strorig).rstrip(b'\n')
@@ -47,25 +47,25 @@ def test_ticket49121(topology_m2):
     assert (utf8str)
 
     # Get the sbin directory so we know where to replace 'ns-slapd'
-    sbin_dir = topology_m2.ms["master1"].get_sbin_dir()
+    sbin_dir = topology_m2.ms["supplier1"].get_sbin_dir()
     log.info('sbin_dir: %s' % sbin_dir)
 
     # stop M1 to do the next updates
-    topology_m2.ms["master1"].stop(30)
-    topology_m2.ms["master2"].stop(30)
+    topology_m2.ms["supplier1"].stop(30)
+    topology_m2.ms["supplier2"].stop(30)
 
     # wait for the servers shutdown
     time.sleep(5)
 
     # start M1 to do the next updates
-    topology_m2.ms["master1"].start()
-    topology_m2.ms["master2"].start()
+    topology_m2.ms["supplier1"].start()
+    topology_m2.ms["supplier2"].start()
 
     for idx in range(1, 10):
         try:
             USER_DN = 'CN=user%d,ou=People,%s' % (idx, DEFAULT_SUFFIX)
             log.info('adding user %s...' % (USER_DN))
-            topology_m2.ms["master1"].add_s(Entry((USER_DN,
+            topology_m2.ms["supplier1"].add_s(Entry((USER_DN,
                                                    {'objectclass': 'top person extensibleObject'.split(' '),
                                                     'cn': 'user%d' % idx,
                                                     'sn': 'SN%d-%s' % (idx, utf8str)})))
@@ -79,7 +79,7 @@ def test_ticket49121(topology_m2):
             try:
                 USER_DN = 'CN=user%d,ou=People,%s' % (idx, DEFAULT_SUFFIX)
                 log.info('[%d] modify user %s - replacing attrs...' % (i, USER_DN))
-                topology_m2.ms["master1"].modify_s(
+                topology_m2.ms["supplier1"].modify_s(
                     USER_DN, [(ldap.MOD_REPLACE, 'cn', b'user%d' % idx),
                               (ldap.MOD_REPLACE, 'ABCDEFGH_ID', [b'239001ad-06dd-e011-80fa-c00000ad5174',
                                                                  b'240f0878-c552-e411-b0f3-000006040037']),
@@ -185,13 +185,13 @@ def test_ticket49121(topology_m2):
             except ldap.LDAPError as e:
                 log.fatal('Failed to modify user - deleting attrs (%s): error %s' % (USER_DN, e.args[0]['desc']))
 
-    # Stop master2
-    topology_m2.ms["master1"].stop(30)
-    topology_m2.ms["master2"].stop(30)
+    # Stop supplier2
+    topology_m2.ms["supplier1"].stop(30)
+    topology_m2.ms["supplier2"].stop(30)
 
     # start M1 to do the next updates
-    topology_m2.ms["master1"].start()
-    topology_m2.ms["master2"].start()
+    topology_m2.ms["supplier1"].start()
+    topology_m2.ms["supplier2"].start()
 
     log.info('Testcase PASSED')
     if DEBUGGING:

+ 36 - 36
dirsrvtests/tests/tickets/ticket49180_test.py

@@ -24,94 +24,94 @@ logging.getLogger(__name__).setLevel(logging.DEBUG)
 log = logging.getLogger(__name__)
 
 
-def remove_master4_agmts(msg, topology_m4):
-    """Remove all the repl agmts to master4. """
+def remove_supplier4_agmts(msg, topology_m4):
+    """Remove all the repl agmts to supplier4. """
 
-    log.info('%s: remove all the agreements to master 4...' % msg)
+    log.info('%s: remove all the agreements to supplier 4...' % msg)
     for num in range(1, 4):
         try:
-            topology_m4.ms["master{}".format(num)].agreement.delete(DEFAULT_SUFFIX,
-                                                                    topology_m4.ms["master4"].host,
-                                                                    topology_m4.ms["master4"].port)
+            topology_m4.ms["supplier{}".format(num)].agreement.delete(DEFAULT_SUFFIX,
+                                                                    topology_m4.ms["supplier4"].host,
+                                                                    topology_m4.ms["supplier4"].port)
         except ldap.LDAPError as e:
             log.fatal('{}: Failed to delete agmt(m{} -> m4), error: {}'.format(msg, num, str(e)))
             assert False
 
 
-def restore_master4(topology_m4):
-    """In our tests will always be removing master 4, so we need a common
+def restore_supplier4(topology_m4):
+    """In our tests will always be removing supplier 4, so we need a common
     way to restore it for another test
     """
 
-    log.info('Restoring master 4...')
+    log.info('Restoring supplier 4...')
 
-    # Enable replication on master 4
-    M4 = topology_m4.ms["master4"]
-    M1 = topology_m4.ms["master1"]
+    # Enable replication on supplier 4
+    M4 = topology_m4.ms["supplier4"]
+    M1 = topology_m4.ms["supplier1"]
     repl = ReplicationManager(SUFFIX)
-    repl.join_master(M1, M4)
+    repl.join_supplier(M1, M4)
     repl.ensure_agreement(M4, M1)
     repl.ensure_agreement(M1, M4)
 
     # Test Replication is working
     for num in range(2, 5):
-        if topology_m4.ms["master1"].testReplication(DEFAULT_SUFFIX, topology_m4.ms["master{}".format(num)]):
+        if topology_m4.ms["supplier1"].testReplication(DEFAULT_SUFFIX, topology_m4.ms["supplier{}".format(num)]):
             log.info('Replication is working m1 -> m{}.'.format(num))
         else:
-            log.fatal('restore_master4: Replication is not working from m1 -> m{}.'.format(num))
+            log.fatal('restore_supplier4: Replication is not working from m1 -> m{}.'.format(num))
             assert False
         time.sleep(1)
 
-    # Check replication is working from master 4 to master1...
-    if topology_m4.ms["master4"].testReplication(DEFAULT_SUFFIX, topology_m4.ms["master1"]):
+    # Check replication is working from supplier 4 to supplier1...
+    if topology_m4.ms["supplier4"].testReplication(DEFAULT_SUFFIX, topology_m4.ms["supplier1"]):
         log.info('Replication is working m4 -> m1.')
     else:
-        log.fatal('restore_master4: Replication is not working from m4 -> 1.')
+        log.fatal('restore_supplier4: Replication is not working from m4 -> m1.')
         assert False
     time.sleep(5)
 
-    log.info('Master 4 has been successfully restored.')
+    log.info('Supplier 4 has been successfully restored.')
 
 
 def test_ticket49180(topology_m4):
 
     log.info('Running test_ticket49180...')
 
-    log.info('Check that replication works properly on all masters')
-    agmt_nums = {"master1": ("2", "3", "4"),
-                 "master2": ("1", "3", "4"),
-                 "master3": ("1", "2", "4"),
-                 "master4": ("1", "2", "3")}
+    log.info('Check that replication works properly on all suppliers')
+    agmt_nums = {"supplier1": ("2", "3", "4"),
+                 "supplier2": ("1", "3", "4"),
+                 "supplier3": ("1", "2", "4"),
+                 "supplier4": ("1", "2", "3")}
 
     for inst_name, agmts in agmt_nums.items():
         for num in agmts:
-            if not topology_m4.ms[inst_name].testReplication(DEFAULT_SUFFIX, topology_m4.ms["master{}".format(num)]):
+            if not topology_m4.ms[inst_name].testReplication(DEFAULT_SUFFIX, topology_m4.ms["supplier{}".format(num)]):
                 log.fatal(
-                    'test_replication: Replication is not working between {} and master {}.'.format(inst_name,
+                    'test_replication: Replication is not working between {} and supplier {}.'.format(inst_name,
                                                                                                     num))
                 assert False
 
-    # Disable master 4
-    log.info('test_clean: disable master 4...')
-    topology_m4.ms["master4"].replica.disableReplication(DEFAULT_SUFFIX)
+    # Disable supplier 4
+    log.info('test_clean: disable supplier 4...')
+    topology_m4.ms["supplier4"].replica.disableReplication(DEFAULT_SUFFIX)
 
-    # Remove the agreements from the other masters that point to master 4
-    remove_master4_agmts("test_clean", topology_m4)
+    # Remove the agreements from the other suppliers that point to supplier 4
+    remove_supplier4_agmts("test_clean", topology_m4)
 
-    # Cleanup - restore master 4
-    restore_master4(topology_m4)
+    # Cleanup - restore supplier 4
+    restore_supplier4(topology_m4)
 
-    attr_errors = os.popen('egrep "attrlist_replace" %s  | wc -l' % topology_m4.ms["master1"].errlog)
+    attr_errors = os.popen('egrep "attrlist_replace" %s  | wc -l' % topology_m4.ms["supplier1"].errlog)
     ecount = int(attr_errors.readline().rstrip())
     log.info("Errors found on m1: %d" % ecount)
     assert (ecount == 0)
 
-    attr_errors = os.popen('egrep "attrlist_replace" %s  | wc -l' % topology_m4.ms["master2"].errlog)
+    attr_errors = os.popen('egrep "attrlist_replace" %s  | wc -l' % topology_m4.ms["supplier2"].errlog)
     ecount = int(attr_errors.readline().rstrip())
     log.info("Errors found on m2: %d" % ecount)
     assert (ecount == 0)
 
-    attr_errors = os.popen('egrep "attrlist_replace" %s  | wc -l' % topology_m4.ms["master3"].errlog)
+    attr_errors = os.popen('egrep "attrlist_replace" %s  | wc -l' % topology_m4.ms["supplier3"].errlog)
     ecount = int(attr_errors.readline().rstrip())
     log.info("Errors found on m3: %d" % ecount)
     assert (ecount == 0)

+ 6 - 6
dirsrvtests/tests/tickets/ticket49287_test.py

@@ -49,12 +49,12 @@ def _wait_for_sync(s1, s2, testbase, final_db):
     _check_entry_exist(s1, dn2, 10, 5)
 
 
-def _check_entry_exist(master, dn, loops=10, wait=1):
+def _check_entry_exist(supplier, dn, loops=10, wait=1):
     attempt = 0
     while attempt <= loops:
         try:
             dn
-            ent = master.getEntry(dn, ldap.SCOPE_BASE, "(objectclass=*)")
+            ent = supplier.getEntry(dn, ldap.SCOPE_BASE, "(objectclass=*)")
             break
         except ldap.NO_SUCH_OBJECT:
             attempt = attempt + 1
@@ -209,8 +209,8 @@ def create_backend(s1, s2, beSuffix, beName):
 
 def replicate_backend(s1, s2, beSuffix):
     repl = ReplicationManager(beSuffix)
-    repl.create_first_master(s1)
-    repl.join_master(s1, s2)
+    repl.create_first_supplier(s1)
+    repl.join_supplier(s1, s2)
     repl.ensure_agreement(s1, s2)
     repl.ensure_agreement(s2, s2)
     # agreement m2_m1_agmt is not needed... :p
@@ -262,8 +262,8 @@ def test_ticket49287(topology_m2):
     """
 
     # return
-    M1 = topology_m2.ms["master1"]
-    M2 = topology_m2.ms["master2"]
+    M1 = topology_m2.ms["supplier1"]
+    M2 = topology_m2.ms["supplier2"]
 
     config_memberof(M1)
     config_memberof(M2)

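replicate_backend() in the ticket49287 hunk is the distilled form of the renamed ReplicationManager workflow: create_first_supplier() bootstraps replication on the first instance, join_supplier() enrolls another, and ensure_agreement() creates each direction needed. A sketch of a fresh two-supplier setup under the new names (the instance objects s1/s2 and the suffix are assumptions):

    from lib389.replica import ReplicationManager

    repl = ReplicationManager("dc=example,dc=com")  # assumed suffix
    repl.create_first_supplier(s1)  # s1, s2: connected DirSrv instances (assumed)
    repl.join_supplier(s1, s2)
    repl.ensure_agreement(s1, s2)   # replicate s1 -> s2
    repl.ensure_agreement(s2, s1)   # and s2 -> s1 for multi-supplier
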
+ 1 - 1
dirsrvtests/tests/tickets/ticket49386_test.py

@@ -136,7 +136,7 @@ def test_ticket49386(topo):
     # Topology for suites are predefined in lib389/topologies.py.
 
     # If you need host, port or any other data about instance,
-    # Please, use the instance object attributes for that (for example, topo.ms["master1"].serverid)
+    # Please, use the instance object attributes for that (for example, topo.ms["supplier1"].serverid)
 
     if DEBUGGING:
         # Add debugging steps(if any)...

+ 2 - 2
dirsrvtests/tests/tickets/ticket49412_test.py

@@ -35,7 +35,7 @@ def test_ticket49412(topo):
         2. For each test step
     """
 
-    M1 = topo.ms["master1"]
+    M1 = topo.ms["supplier1"]
 
     # wrong call with invalid value (should be str(60)
     # that create replace with NULL value
@@ -52,7 +52,7 @@ def test_ticket49412(topo):
     # Topology for suites are predefined in lib389/topologies.py.
 
     # If you need host, port or any other data about instance,
-    # Please, use the instance object attributes for that (for example, topo.ms["master1"].serverid)
+    # Please, use the instance object attributes for that (for example, topo.ms["supplier1"].serverid)
 
     if DEBUGGING:
         # Add debugging steps(if any)...

+ 4 - 4
dirsrvtests/tests/tickets/ticket49460_test.py

@@ -70,9 +70,9 @@ def test_ticket_49460(topo):
         1. No report of failure when the RUV is updated
     """
 
-    M1 = topo.ms["master1"]
-    M2 = topo.ms["master2"]
-    M3 = topo.ms["master3"]
+    M1 = topo.ms["supplier1"]
+    M2 = topo.ms["supplier2"]
+    M3 = topo.ms["supplier3"]
 
     for i in (M1, M2, M3):
         i.config.loglevel(vals=[256 + 4], service='access')
@@ -102,7 +102,7 @@ def test_ticket_49460(topo):
     # Topology for suites are predefined in lib389/topologies.py.
 
     # If you need host, port or any other data about instance,
-    # Please, use the instance object attributes for that (for example, topo.ms["master1"].serverid)
+    # Please, use the instance object attributes for that (for example, topo.ms["supplier1"].serverid)
 
     if DEBUGGING:
         # Add debugging steps(if any)...

+ 4 - 4
dirsrvtests/tests/tickets/ticket49463_test.py

@@ -81,10 +81,10 @@ def test_ticket_49463(topo):
     """
 
     # Step 1 - Configure fractional (skip telephonenumber) replication
-    M1 = topo.ms["master1"]
-    M2 = topo.ms["master2"]
-    M3 = topo.ms["master3"]
-    M4 = topo.ms["master4"]
+    M1 = topo.ms["supplier1"]
+    M2 = topo.ms["supplier2"]
+    M3 = topo.ms["supplier3"]
+    M4 = topo.ms["supplier4"]
     repl = ReplicationManager(DEFAULT_SUFFIX)
     fractional_server_to_replica(M1, M2)
     fractional_server_to_replica(M1, M3)

+ 1 - 1
dirsrvtests/tests/tickets/ticket49471_test.py

@@ -52,7 +52,7 @@ def test_ticket49471(topo):
     # Topology for suites are predefined in lib389/topologies.py.
 
     # If you need host, port or any other data about instance,
-    # Please, use the instance object attributes for that (for example, topo.ms["master1"].serverid)
+    # Please, use the instance object attributes for that (for example, topo.ms["supplier1"].serverid)
 
     S1 = topo.standalone
     add_user(S1, 1)

+ 1 - 1
dirsrvtests/tests/tickets/ticket49540_test.py

@@ -120,7 +120,7 @@ def test_ticket49540(topo):
     # Topology for suites are predefined in lib389/topologies.py.
 
     # If you need host, port or any other data about instance,
-    # Please, use the instance object attributes for that (for example, topo.ms["master1"].serverid)
+    # Please, use the instance object attributes for that (for example, topo.ms["supplier1"].serverid)
 
     if DEBUGGING:
         # Add debugging steps(if any)...

+ 2 - 2
dirsrvtests/tests/tickets/ticket49623_2_test.py

@@ -33,7 +33,7 @@ def test_modrdn_loop(topology_m1):
 
     :customerscenario: True
 
-    :setup: Single master instance
+    :setup: Single supplier instance
 
     :steps: 1. Add an entry with RDN start rdn
             2. Rename the entry to rdn change
@@ -44,7 +44,7 @@ def test_modrdn_loop(topology_m1):
             1. No error messages
     """
 
-    topo = topology_m1.ms['master1']
+    topo = topology_m1.ms['supplier1']
     TEST_ENTRY_RDN_START = 'start'
     TEST_ENTRY_RDN_CHANGE = 'change'
     TEST_ENTRY_NAME = 'tuser'

File diff suppressed because it is too large
+ 173 - 173
dirsrvtests/tests/tickets/ticket49658_test.py


+ 1 - 1
dirsrvtests/tests/tickets/ticket50078_test.py

@@ -21,7 +21,7 @@ def test_ticket50078(topology_m1h1c1):
     a hub or consumer.
     """
 
-    M1 = topology_m1h1c1.ms["master1"]
+    M1 = topology_m1h1c1.ms["supplier1"]
     H1 = topology_m1h1c1.hs["hub1"]
     C1 = topology_m1h1c1.cs["consumer1"]
     #

+ 1 - 1
dirsrvtests/tests/tickets/ticket50232_test.py

@@ -145,7 +145,7 @@ def test_ticket50232_reverse(topology_st):
     #
     log.info('Setting up replication...')
     repl = ReplicationManager(DEFAULT_SUFFIX)
-    # repl.create_first_master(topology_st.standalone)
+    # repl.create_first_supplier(topology_st.standalone)
     #
     # enable dynamic plugins, memberof and retro cl plugin
     #

+ 5 - 5
ldap/servers/slapd/tools/ldclt/ldapfct.c

@@ -1808,7 +1808,7 @@ createMissingNodes(
         if (incrementNbOpers(tttctx) < 0)
             return (-1);
 #ifdef SOLARIS /*JLS 14-11-00*/
-        if (mctx.slavesNb > 0)
+        if (mctx.workersNb > 0)
             if (opAdd(tttctx, LDAP_REQ_ADD, nodeDN, attrs, NULL, NULL) < 0)
                 return (-1);
 #endif     /*JLS 14-11-00*/
@@ -2003,7 +2003,7 @@ getPending(
        * Ok, the operation is well performed !
        * Maybe we are running in check mode ?
        */
-                    if (mctx.slavesNb == 0) {
+                    if (mctx.workersNb == 0) {
                         if (msgIdDel(tttctx, msgid, 1) < 0)
                             return (-1);
                     }
@@ -2148,7 +2148,7 @@ doRename(
                 if (incrementNbOpers(tttctx) < 0) /* Memorize operation */
                     return (-1);
 #ifdef SOLARIS /*JLS 14-11-00*/
-                if (mctx.slavesNb > 0)
+                if (mctx.workersNb > 0)
                     if (opAdd(tttctx, LDAP_REQ_MODRDN, oldDn, NULL,
                               tttctx->bufFilter, tttctx->bufBaseDN) < 0)
                         return (-1);
@@ -2521,7 +2521,7 @@ doAddEntry(
                 if (incrementNbOpers(tttctx) < 0) /* Memorize operation */
                     return (-1);
 #ifdef SOLARIS /*JLS 14-11-00*/
-                if (mctx.slavesNb > 0)
+                if (mctx.workersNb > 0)
                     if (opAdd(tttctx, LDAP_REQ_ADD, newDn, attrs, NULL, NULL) < 0)
                         return (-1);
 #endif /*JLS 14-11-00*/
@@ -2979,7 +2979,7 @@ doDeleteEntry(
             if (incrementNbOpers(tttctx) < 0) /* Memorize operation */
                 return (-1);
 #ifdef SOLARIS /*JLS 14-11-00*/
-            if (mctx.slavesNb > 0)
+            if (mctx.workersNb > 0)
                 if (opAdd(tttctx, LDAP_REQ_DELETE, delDn, NULL, NULL, NULL) < 0)
                     return (-1);
 #endif /*JLS 14-11-00*/

+ 30 - 30
ldap/servers/slapd/tools/ldclt/ldclt.c

@@ -36,7 +36,7 @@
 #include <string.h>                              /* strerror(), etc... */
 #include <errno.h>        /* errno, etc... */    /*JLS 06-03-00*/
 #include <fcntl.h>        /* O_RDONLY, etc... */ /*JLS 02-04-01*/
-#include <time.h>        /* ctime(), etc... */   /*JLS 18-08-00*/
+#include <time.h>         /* ctime(), etc... */  /*JLS 18-08-00*/
 #include <lber.h>                                /* ldap C-API BER decl. */
 #include <ldap.h>                                /* ldap C-API decl. */
 #ifdef LDAP_H_FROM_QA_WKA
@@ -62,8 +62,8 @@
  */
 main_context mctx;                /* Main context */
 thread_context tctx[MAX_THREADS]; /* Threads contexts */
-check_context cctx[MAX_SLAVES];   /* Check threads contextes */
-int masterPort = 16000;
+check_context cctx[MAX_WORKERS];  /* Check threads contexts */
+int supplierPort = 16000;
 
 extern char *ldcltVersion; /* ldclt version */ /*JLS 18-08-00*/
 
@@ -295,8 +295,8 @@ runThem(void)
     /*
    * Maybe create the check operation threads.
    */
-    if (mctx.slavesNb > 0) {
-        for (i = 0; i < mctx.slavesNb; i++) {
+    if (mctx.workersNb > 0) {
+        for (i = 0; i < mctx.workersNb; i++) {
             if (mctx.mode & VERY_VERBOSE)
                 printf("ldclt[%d]: Creating thread C%03d\n", mctx.pid, i);
 
@@ -307,7 +307,7 @@ runThem(void)
             cctx[i].status = DEAD;
             cctx[i].thrdNum = i;
             cctx[i].calls = 0;
-            cctx[i].slaveName = NULL;
+            cctx[i].workerName = NULL;
             cctx[i].nbEarly = 0;
             cctx[i].nbLate = 0;
             cctx[i].nbLostOp = 0;
@@ -593,10 +593,10 @@ monitorThem(void)
   * Let's wait for the consumers (aka check threads)
    */
     allDead = 0;
-    if (mctx.slavesNb > 0)
+    if (mctx.workersNb > 0)
         while (!allDead) {
             allDead = 1;
-            for (i = 0; i < mctx.slavesNb; i++)
+            for (i = 0; i < mctx.workersNb; i++)
                 if (cctx[i].status != DEAD)
                     allDead = 0;
             if (!allDead)
@@ -710,60 +710,60 @@ printGlobalStatistics(void)
     /*
    * Check threads statistics
    */
-    if (mctx.slavesNb > 0) {
-        if (!(mctx.slaveConn))
-            printf("ldclt[%d]: Problem: slave never connected !!!!\n", mctx.pid);
+    if (mctx.workersNb > 0) {
+        if (!(mctx.workerConn))
+            printf("ldclt[%d]: Problem: worker never connected !!!!\n", mctx.pid);
         else {
             total = 0;
-            for (i = 0; i < mctx.slavesNb; i++)
+            for (i = 0; i < mctx.workersNb; i++)
                 total += cctx[i].nbOpRecv;
             printf("ldclt[%d]: Global number of replication operations received: %5d\n",
                    mctx.pid, total);
 
             total = 0;
-            for (i = 0; i < mctx.slavesNb; i++)
+            for (i = 0; i < mctx.workersNb; i++)
                 total += cctx[i].nbEarly;
             printf("ldclt[%d]: Global number of early replication:               %5d\n",
                    mctx.pid, total);
 
             total = 0;
-            for (i = 0; i < mctx.slavesNb; i++)
+            for (i = 0; i < mctx.workersNb; i++)
                 total += cctx[i].nbLate;
             printf("ldclt[%d]: Global number of late replication:                %5d\n",
                    mctx.pid, total);
 
             total = 0;
-            for (i = 0; i < mctx.slavesNb; i++)
+            for (i = 0; i < mctx.workersNb; i++)
                 total += cctx[i].nbLostOp;
             printf("ldclt[%d]: Global number of lost operation:                  %5d\n",
                    mctx.pid, total);
 
             total = 0;
-            for (i = 0; i < mctx.slavesNb; i++)
+            for (i = 0; i < mctx.workersNb; i++)
                 total += cctx[i].nbNotOnList;
             printf("ldclt[%d]: Global number of not on list replication op.:     %5d\n",
                    mctx.pid, total);
 
             total = 0;
-            for (i = 0; i < mctx.slavesNb; i++)
+            for (i = 0; i < mctx.workersNb; i++)
                 total += cctx[i].nbRepFail32;
             printf("ldclt[%d]: Global number of repl failed LDAP_NO_SUCH_OBJECT: %5d\n",
                    mctx.pid, total);
 
             total = 0;
-            for (i = 0; i < mctx.slavesNb; i++)
+            for (i = 0; i < mctx.workersNb; i++)
                 total += cctx[i].nbRepFail68;
             printf("ldclt[%d]: Global number of repl failed LDAP_ALREADY_EXISTS: %5d\n",
                    mctx.pid, total);
 
             total = 0;
-            for (i = 0; i < mctx.slavesNb; i++)
+            for (i = 0; i < mctx.workersNb; i++)
                 total += cctx[i].nbRepFailX;
             printf("ldclt[%d]: Global number of repl failed other error:         %5d\n",
                    mctx.pid, total);
 
             total = 0;
-            for (i = 0; i < mctx.slavesNb; i++)
+            for (i = 0; i < mctx.workersNb; i++)
                 total += cctx[i].nbStillOnQ;
             printf("ldclt[%d]: Global number of repl still on Queue:             %5d\n",
                    mctx.pid, total);
@@ -1287,7 +1287,7 @@ basicInit(void)
    * Maybe we should initiate the operation list mutex and other check-related
    * thing...
    */
-    if (mctx.slavesNb > 0) {
+    if (mctx.workersNb > 0) {
         /*
      * Initiates the mutex
      */
@@ -2281,8 +2281,8 @@ main(
     mctx.sasl_secprops = NULL;
     mctx.sasl_username = NULL;
     mctx.scope = DEF_SCOPE;
-    mctx.slaveConn = 0;
-    mctx.slavesNb = 0;
+    mctx.workerConn = 0;
+    mctx.workersNb = 0;
     mctx.srch_nentries = -1;
     mctx.timeout = DEF_TIMEOUT;
     mctx.totalReq = -1;
@@ -2364,7 +2364,7 @@ main(
             mctx.port = atoi(optarg);
             break;
         case 'P':
-            masterPort = atoi(optarg);
+            supplierPort = atoi(optarg);
             break;
         case 'q':
             mctx.mode |= QUIET;
@@ -2387,8 +2387,8 @@ main(
             mctx.timeout = atoi(optarg);
             break;
         case 'S':
-            mctx.slaves[mctx.slavesNb] = optarg;
-            mctx.slavesNb++;
+            mctx.workers[mctx.workersNb] = optarg;
+            mctx.workersNb++;
             break;
         case 'T':
             mctx.totalReq = atoi(optarg);
@@ -2823,10 +2823,10 @@ main(
             printf("Ignore error       = %d (%s)\n",
                    mctx.ignErr[i], my_ldap_err2string(mctx.ignErr[i]));
         fflush(stdout);
-        if (mctx.slavesNb > 0) {
-            printf("Slave(s) to check  =");
-            for (size_t i = 0; i < mctx.slavesNb; i++)
-                printf(" %s", mctx.slaves[i]);
+        if (mctx.workersNb > 0) {
+            printf("Workers(s) to check  =");
+            for (size_t i = 0; i < mctx.workersNb; i++)
+                printf(" %s", mctx.workers[i]);
             printf("\n");
         }
     }

+ 9 - 9
ldap/servers/slapd/tools/ldclt/ldclt.h

@@ -37,11 +37,11 @@
 #define MAX_ATTRIBS 40 /* Max number of attributes */   /*JLS 28-03-01*/
 #define MAX_DN_LENGTH 1024                              /* Max length for a DN */
 #define MAX_ERROR_NB 0x7b                               /* Max ldap err number + 1 */
-#define NEGATIVE_MAX_ERROR_NB (LDAP_X_CONNECTING - 1) /* Mininum ldap err number */
-#define MAX_IGN_ERRORS 20                            /* Max errors ignored */
-#define MAX_FILTER 4096                               /* Max filters length */
-#define MAX_THREADS 1000 /* Max number of threads */ /*JLS 21-11-00*/
-#define MAX_SLAVES 20                                /* Max number of slaves */
+#define NEGATIVE_MAX_ERROR_NB (LDAP_X_CONNECTING - 1)   /* Minimum ldap err number */
+#define MAX_IGN_ERRORS 20                               /* Max errors ignored */
+#define MAX_FILTER 4096                                 /* Max filters length */
+#define MAX_THREADS 1000                                /* Max number of threads */
+#define MAX_WORKERS 20                                  /* Max number of workers */
 
 #define DEF_IMAGES_PATH "../../data/ldclt/images"
 #define DEF_REFERRAL REFERRAL_ON     /*JLS 08-03-01*/
@@ -428,9 +428,9 @@ typedef struct main_context
     char *sasl_secprops;
     char *sasl_username;
     int scope;                  /* Searches scope */
-    int slaveConn;              /* Slave has connected */
-    char *slaves[MAX_SLAVES];   /* Slaves list */
-    int slavesNb;               /* Number of slaves */
+    int workerConn;             /* Worker has connected */
+    char *workers[MAX_WORKERS]; /* Workers list */
+    int workersNb;              /* Number of workers */
     int srch_nentries;          /* number of entries that must be returned by each search op */
     int timeout;                /* LDAP op. t.o. */
     struct timeval timeval;     /* Timeval structure */
@@ -523,7 +523,7 @@ typedef struct check_context
 {
     oper *headListOp;                /* Head list of operation */
     thoper *dcOper;                  /* Double check operation list */
-    char *slaveName;                 /* Name of the slave */
+    char *workerName;                /* Name of the worker */
     int sockfd;                      /* Socket fd after accept() */
     int status;                      /* Status */
     int thrdNum;                     /* Thread number */

+ 8 - 8
ldap/servers/slapd/tools/ldclt/ldclt.man

@@ -1,6 +1,6 @@
 #ident "ldclt @(#)ldclt.man	1.56 01/05/04"
 
-ldclt(1)                        SUNQAldap                ldclt(1)
+ldclt(1)                                             ldclt(1)
 
 NAME
         ldclt - ldap client for stress and reliability testing.
@@ -14,7 +14,7 @@ SYNOPSIS
               [-a <max pending>] [-E <max errors>]
               [-I <ignored error>] [-T <total>]
               [-f <filter>] [-s <scope>]
-	      [-S <slave>] [-P<master port>]
+	      [-S <worker>] [-P<supplier port>]
 	      [-W <waitsec>] [-Z <certfile>]
 
 AVAILABILITY
@@ -627,9 +627,9 @@ OPTIONS
         -p <port number>
                 Server port. Default port 389.
 
-	-P <master port number>
+	-P <supplier port number>
 		This  is  the  port used to communicate with the
-		slaves in order to check the replication.
+		workers in order to check the replication.
 
         -q      Quiet mode. When used, the errors of the option
                 -I are not printed.  The messages "Intermediate
@@ -651,12 +651,12 @@ OPTIONS
 		Valid only for searches.  May be base,  subtree
 		or one. The default value is subtree.
 
-	-S <slave>
-		The  slave host given in argument will be under
+	-S <worker>
+		The  worker host given in argument will be under
 		monitoring   to   ensure  that  the  operations
 		performed  on  the  server are well realized on
-		the given  slave.  This option may be used more
-		than one time to indicate more than one slave.
+		the given  worker.  This option may be used more
+		than one time to indicate more than one worker.
 
         -t <seconds>
                 LDAP operations timeout. Default 30 seconds.

+ 3 - 3
ldap/servers/slapd/tools/ldclt/ldclt.use

@@ -6,7 +6,7 @@ usage: ldclt [-qQvV] [-E <max errors>]
 	     [-I <err number>] [-T <total>]
 	     [-r <low> -R <high>]
 	     [-f <filter>] [-s <scope>]
-	     [-S <slave>] [-P<master port>]
+	     [-S <worker>] [-P<supplier port>]
 	     [-W <waitsec>] [-Z <certfile>]
 
 	This tool is a ldap client targeted to validate the reliability of
@@ -70,13 +70,13 @@ usage: ldclt [-qQvV] [-E <max errors>]
 	 -N  Number of samples (10 seconds each).  Default infinite.
 	 -o  SASL Options.
 	 -p  Server port.                          Default 389.
-	 -P  Master port (to check replication).   Default 16000.
+	 -P  Supplier port (to check replication).   Default 16000.
 	 -q  Quiet mode. See option -I.
 	 -Q  Super quiet mode.
 	 -r  Range's low value.
 	 -R  Range's high value.
 	 -s  Scope. May be base, subtree or one.   Default subtree.
-	 -S  Slave to check.
+	 -S  Worker to check.
 	 -t  LDAP operations timeout. Default 30 seconds.
 	 -T  Total number of operations per thread.	   Default infinite.
 	 -v  Verbose.

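Under the renamed options, a replication-check run of ldclt names each worker host with a repeated -S and sets the check port with -P, as documented in the usage text above. An illustrative invocation (host names, bind credentials, and the -e workload string are assumptions, not taken from this diff):

    ldclt -h supplier.example.com -p 389 \
          -D "cn=Directory Manager" -w password \
          -b "ou=people,dc=example,dc=com" \
          -e add,person,incr,noloop -r 0 -R 999 \
          -S worker1.example.com -S worker2.example.com -P 16000
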
Some files were not shown because too many files changed in this diff