parallelECLAT

class PAMI.frequentPattern.pyspark.parallelECLAT.parallelECLAT(iFile, minSup, numWorkers, sep='\t')[source]

Bases: _frequentPatterns

Description:

parallelECLAT is an algorithm to discover frequent patterns in a transactional database. This program employs the apriori property (also known as the downward closure property) in parallel to reduce the search space effectively.
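The core ECLAT idea described above can be sketched in plain Python as follows. This is a minimal single-machine illustration, not PAMI's parallel implementation: each item is mapped to a tid-list (the set of transaction ids containing it), itemsets are extended by intersecting tid-lists, and the downward closure property prunes any extension whose support falls below minSup.

```python
# Minimal single-machine sketch of the ECLAT idea (not PAMI's code):
# vertical tid-lists plus downward-closure pruning.

def eclat(transactions, minSup):
    """Return {itemset (tuple): support (count)} for all frequent itemsets."""
    # Build the vertical representation: item -> set of transaction ids
    tidlists = {}
    for tid, transaction in enumerate(transactions):
        for item in transaction:
            tidlists.setdefault(item, set()).add(tid)

    patterns = {}

    def recurse(prefix, items):
        for i, (item, tids) in enumerate(items):
            support = len(tids)
            if support >= minSup:           # downward closure: only frequent
                itemset = prefix + (item,)  # prefixes are extended further
                patterns[itemset] = support
                suffix = [(other, tids & otids)
                          for other, otids in items[i + 1:]]
                recurse(itemset, suffix)

    recurse((), sorted(tidlists.items()))
    return patterns
```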

Reference:

Parameters:
  • iFile – str : Name of the Input file to mine complete set of frequent patterns

  • oFile – str : Name of the output file to store complete set of frequent patterns

  • minSup – int or float : The user can specify minSup either as a count or as a proportion of the database size. If the program detects that minSup is an integer, it is treated as a count; otherwise, it is treated as a proportion.

  • sep – str : This variable is used to distinguish items from one another in a transaction. The default separator is the tab space. However, users can override the default separator.

  • numWorkers – int : The number of partitions. An executor process is started on each worker node, and the partition is the unit of work that an executor processes.
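The two interpretations of minSup noted above (count vs. proportion) can be illustrated with a small helper. The function name and exact conversion rule below are assumptions for illustration, not PAMI's internal code:

```python
# Hypothetical helper showing how a minSup given as a count (int) or as a
# proportion (float) could be converted to an absolute support threshold.

def convert_min_sup(minSup, dbSize):
    """Return minSup as an absolute transaction count."""
    if isinstance(minSup, int):
        return minSup                   # already expressed as a count
    if isinstance(minSup, float):
        return int(minSup * dbSize)     # proportion of database size
    raise TypeError("minSup must be int or float")
```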

Attributes:
startTime : float

To record the start time of the mining process

endTime : float

To record the completion time of the mining process

finalPatterns : dict

Storing the complete set of patterns in a dictionary variable

memoryUSS : float

To store the total amount of USS memory consumed by the program

memoryRSS : float

To store the total amount of RSS memory consumed by the program

lno : int

To record the number of transactions

Methods to execute code on terminal

Format:

(.venv) $ python3 parallelECLAT.py <inputFile> <outputFile> <minSup> <numWorkers>

Example Usage:

(.venv) $ python3 parallelECLAT.py sampleDB.txt patterns.txt 10.0 3

Note

minSup will be considered as a percentage of database transactions

Importing this algorithm into a python program

import PAMI.frequentPattern.pyspark.parallelECLAT as alg

obj = alg.parallelECLAT(iFile, minSup, numWorkers)

obj.mine()

frequentPatterns = obj.getPatterns()

print("Total number of Frequent Patterns:", len(frequentPatterns))

obj.save(oFile)

Df = obj.getPatternsAsDataFrame()

memUSS = obj.getMemoryUSS()

print("Total Memory in USS:", memUSS)

memRSS = obj.getMemoryRSS()

print("Total Memory in RSS:", memRSS)

run = obj.getRuntime()

print("Total ExecutionTime in seconds:", run)

Credits:

The complete program was written by Yudai Masu under the supervision of Professor Rage Uday Kiran.

getMemoryRSS()[source]

Total amount of RSS memory consumed by the mining process will be retrieved from this function.

Returns: RSS memory consumed by the mining process

Return type: float

getMemoryUSS()[source]

Total amount of USS memory consumed by the mining process will be retrieved from this function.

Returns: USS memory consumed by the mining process

Return type: float

getPatterns()[source]

Function to return the set of frequent patterns after completion of the mining process.

Returns: the frequent patterns

Return type: dict

getPatternsAsDataFrame()[source]

Storing the final frequent patterns in a dataframe.

Returns: the frequent patterns in a dataframe

Return type: pd.DataFrame
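A minimal sketch of how a finalPatterns-style dictionary can be turned into a dataframe. This is an illustration only, not PAMI's implementation; the column names "Patterns" and "Support" are assumptions:

```python
import pandas as pd

# Hypothetical sketch: convert a {pattern: support} dict to a DataFrame.
# The column names "Patterns" and "Support" are assumptions.
def patterns_to_dataframe(finalPatterns):
    rows = [{"Patterns": pattern, "Support": support}
            for pattern, support in finalPatterns.items()]
    return pd.DataFrame(rows, columns=["Patterns", "Support"])
```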

getRuntime()[source]

Calculating the total amount of runtime taken by the mining process.

Returns: total amount of runtime taken by the mining process

Return type: float

mine()[source]

Frequent pattern mining process will start from here

printResults()[source]

This function is used to print the results

save(outFile)[source]

Complete set of frequent patterns will be written to an output file.

Parameters: outFile – name of the output file

Type: csvfile
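A minimal sketch of what writing a patterns dictionary to an output file might look like. The "pattern:support" line layout below is an assumption for illustration, not a statement of PAMI's actual output format:

```python
# Hypothetical sketch of persisting a {pattern: support} dict to a text file;
# the "pattern:support" line layout is an assumption about the format.
def save_patterns(finalPatterns, outFile):
    with open(outFile, "w") as f:
        for pattern, support in finalPatterns.items():
            f.write(f"{pattern}:{support}\n")
```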

startMine()[source]

Frequent pattern mining process will start from here