PAMI.sequentialPatternMining.basic package
Submodules
PAMI.sequentialPatternMining.basic.SPADE module
- class PAMI.sequentialPatternMining.basic.SPADE.SPADE(iFile, minSup, sep='\t')[source]
Bases:
_sequentialPatterns
- Description:
SPADE is one of the fundamental algorithms for discovering sequential frequent patterns in a transactional database.
This program employs the SPADE property (also known as the downward closure property) to reduce the search space effectively.
The algorithm uses a breadth-first search for patterns of length 1 and 2 and a depth-first search for patterns of length 3 and above to find the complete set of frequent patterns in a transactional database. A brief sketch of the underlying vertical id-list join appears after the Credits note below.
- Reference:
Mohammed J. Zaki. 2001. SPADE: An Efficient Algorithm for Mining Frequent Sequences. Mach. Learn. 42, 1-2 (January 2001), 31-60. DOI=10.1023/A:1007652502315 http://dx.doi.org/10.1023/A:1007652502315
- Parameters:
iFile – str : Name of the input file to mine the complete set of sequential frequent patterns
oFile – str : Name of the output file to store the complete set of sequential frequent patterns
minSup – float or int or str : The minimum number (or proportion) of transactions in the database in which a pattern must appear. Example: minSup=10 is treated as an absolute count, while minSup=10.0 is treated as a proportion of the database size (a small sketch of this conversion appears after this parameter list)
sep – str : This variable is used to distinguish items from one another in a transaction. The default separator is the tab character; however, users can override it.
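The conversion implied by the minSup parameter can be illustrated with a short, hedged sketch. The helper name convert_minsup and its exact behaviour are assumptions inferred from the description above and from the _convert(value) method listed for SPAM below; it is not a verbatim copy of PAMI's code. The same convention applies to the SPAM and prefixSpan classes in this module.

def convert_minsup(minSup, databaseSize):
    # assumption: mirrors the count-vs-proportion rule described above
    if isinstance(minSup, int):
        return minSup                                  # absolute support count
    if isinstance(minSup, float):
        return databaseSize * minSup                   # proportion of the database size
    if isinstance(minSup, str):
        # a string containing '.' is read as a proportion, otherwise as a count
        return databaseSize * float(minSup) if '.' in minSup else int(minSup)
    raise TypeError("minSup must be int, float, or str")

print(convert_minsup(10, 1000))    # 10 sequences (absolute count)
print(convert_minsup(0.01, 1000))  # 10.0 sequences (1% of a 1,000-sequence database)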
- Attributes:
- iFile : str
Input file name or path of the input file
- oFile : str
Name of the output file or path of the output file
- minSup : float or int or str
The user can specify minSup either as a count or as a proportion of the database size. If minSup is an integer, it is treated as an absolute count; otherwise it is treated as a proportion. Example: minSup=10 is treated as a count, while minSup=10.0 is treated as a proportion
- sep : str
This variable is used to distinguish items from one another in a transaction. The default separator is the tab character; however, users can override it
- startTime : float
To record the start time of the mining process
- endTime : float
To record the completion time of the mining process
- finalPatterns : dict
Storing the complete set of patterns in a dictionary variable
- memoryUSS : float
To store the total amount of USS memory consumed by the program
- memoryRSS : float
To store the total amount of RSS memory consumed by the program
- Database : list
To store the transactions of a database in a list
- _xLenDatabase : dict
To store the data in different sequences, separated by sequence, row number, and length
- _xLenDatabaseSame : dict
To store the data in the same sequence, separated by sequence, row number, and length
- Methods:
- mine()
Mining process will start from here
- getPatterns()
Complete set of patterns will be retrieved with this function
- savePatterns(oFile)
Complete set of frequent patterns will be loaded into an output file
- getPatternsAsDataFrame()
Complete set of frequent patterns will be loaded into a dataframe
- getMemoryUSS()
Total amount of USS memory consumed by the mining process will be retrieved from this function
- getMemoryRSS()
Total amount of RSS memory consumed by the mining process will be retrieved from this function
- getRuntime()
Total amount of runtime taken by the mining process will be retrieved from this function
- candidateToFrequent(candidateList)
Generates frequent patterns from the candidate patterns
- frequentToCandidate(frequentList, length)
Generates candidate patterns from the frequent patterns
Methods to execute code on terminal
Format:
(.venv) $ python3 SPADE.py <inputFile> <outputFile> <minSup>
Example usage:
(.venv) $ python3 SPADE.py sampleDB.txt patterns.txt 10.0
Note: a float minSup is interpreted as a proportion of the number of database transactions, while an integer minSup is treated as an absolute support count.
Importing this algorithm into a Python program
import PAMI.sequentialPatternMining.basic.SPADE as alg
obj = alg.SPADE(iFile, minSup)
obj.mine()
frequentPatterns = obj.getPatterns()
print("Total number of Frequent Patterns:", len(frequentPatterns))
obj.save(oFile)
Df = obj.getPatternsAsDataFrame()
memUSS = obj.getMemoryUSS()
print("Total Memory in USS:", memUSS)
memRSS = obj.getMemoryRSS()
print("Total Memory in RSS:", memRSS)
run = obj.getRuntime()
print("Total ExecutionTime in seconds:", run)
Credits:
The complete program was written by Suzuki Shota under the supervision of Professor Rage Uday Kiran.
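The breadth-first and depth-first joins mentioned in the description operate on vertical id-lists, as in the reference above. The following is a minimal, hedged sketch of the temporal (sequence-extension) join this family of algorithms relies on; every name in it is an illustrative assumption and it does not reproduce PAMI's internal makexLenDatabase code.

def temporal_join(idListA, idListB):
    # id-lists are lists of (sequenceId, position) pairs; the join keeps the
    # occurrences of B that appear strictly after an occurrence of A in the
    # same sequence, i.e. the occurrences of the sequence extension A -> B
    joined = set()
    for sidA, posA in idListA:
        for sidB, posB in idListB:
            if sidA == sidB and posB > posA:
                joined.add((sidB, posB))
    return sorted(joined)

idListA = [(1, 1), (1, 3), (2, 2)]          # occurrences of pattern A
idListB = [(1, 2), (2, 1), (3, 4)]          # occurrences of item B
joined = temporal_join(idListA, idListB)
support = len({sid for sid, _ in joined})   # support = number of distinct sequences
print(joined, support)                      # [(1, 2)] 1

Candidates whose joined id-list falls below minSup are pruned, which is the downward closure property referred to in the description.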
- getMemoryRSS()[source]
Total amount of RSS memory consumed by the mining process will be retrieved from this function
- Returns:
returning RSS memory consumed by the mining process
- Return type:
float
- getMemoryUSS()[source]
Total amount of USS memory consumed by the mining process will be retrieved from this function
- Returns:
returning USS memory consumed by the mining process
- Return type:
float
- getPatterns()[source]
Function to send the set of frequent patterns after completion of the mining process
- Returns:
returning frequent patterns
- Return type:
dict
- getPatternsAsDataFrame()[source]
Storing final frequent patterns in a dataframe
- Returns:
returning frequent patterns in a dataframe
- Return type:
pd.DataFrame
- getRuntime()[source]
Calculating the total amount of runtime taken by the mining process
- Returns:
returning total amount of runtime taken by the mining process
- Return type:
float
- make1LenDatabase()[source]
To generate length-1 frequent patterns using a breadth-first search and update the Database to a sequential database
- make2LenDatabase()[source]
To generate length-2 frequent patterns by joining two length-1 patterns using a breadth-first search, and update the xLenDatabase to a sequential database
- make3LenDatabase()[source]
To call each length-2 pattern to generate length-3 frequent patterns using a depth-first search
- makeNextRow(bs, latestWord, latestWord2)[source]
To make a pattern row when two patterns have their latest words in different sequences.
- Parameters:
bs – previous pattern without the latest word
latestWord – latest word of one previous pattern
latestWord2 – latest word of the other previous pattern
- makeNextRowSame(bs, latestWord, latestWord2)[source]
To make a pattern row when one pattern has its latest word (latestWord) in the same sequence and the other (latestWord2) in a different sequence.
- Parameters:
bs – previous pattern without the latest word
latestWord – latest word of one previous pattern in the same sequence
latestWord2 – latest word of the other previous pattern in a different sequence
- makeNextRowSame2(bs, latestWord, latestWord2)[source]
To make a pattern row when two patterns have their latest words in the same sequence.
- Parameters:
bs – previous pattern without the latest word
latestWord – latest word of one previous pattern
latestWord2 – latest word of the other previous pattern
- makeNextRowSame3(bs, latestWord, latestWord2)[source]
To make a pattern row when the two patterns' latest words come from different sequences but both latest words are placed in the same sequence.
- Parameters:
bs – previous pattern without the latest word
latestWord – latest word of one previous pattern
latestWord2 – latest word of the other previous pattern
- makexLenDatabase(rowLen, bs, latestWord)[source]
To generate frequent patterns of length rowLen from patterns whose latest word is in the same sequence, by joining patterns of length rowLen-1 using a depth-first search, and update xLenDatabase to a sequential database.
- Parameters:
rowLen – row length of the patterns
bs – patterns without the latest word
latestWord – latest word of the patterns
- makexLenDatabaseSame(rowLen, bs, latestWord)[source]
To generate frequent patterns of length 3 or more from patterns whose latest word is in a different sequence, using a depth-first search, and update xLenDatabase to a sequential database.
- Parameters:
rowLen – row length of the previous patterns
bs – previous patterns without the latest word
latestWord – latest word of the previous patterns
PAMI.sequentialPatternMining.basic.SPAM module
- class PAMI.sequentialPatternMining.basic.SPAM.SPAM(iFile, minSup, sep='\t')[source]
Bases:
_sequentialPatterns
- Description:
SPAM is one of the fundamental algorithms for discovering sequential frequent patterns in a transactional database. This program employs the SPAM property (also known as the downward closure property) to reduce the search space effectively. The algorithm uses a bitmap representation and a depth-first search (see DfsPruning below) to find the complete set of frequent patterns in a sequential database.
- Reference:
J. Ayres, J. Gehrke, T. Yiu, and J. Flannick. Sequential Pattern Mining Using Bitmaps. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Edmonton, Alberta, Canada, July 2002.
- Parameters:
iFile – str : Name of the input file to mine the complete set of sequential frequent patterns
oFile – str : Name of the output file to store the complete set of sequential frequent patterns
minSup – float or int or str : The minimum number (or proportion) of transactions in the database in which a pattern must appear. Example: minSup=10 is treated as an absolute count, while minSup=10.0 is treated as a proportion of the database size
sep – str : This variable is used to distinguish items from one another in a transaction. The default separator is the tab character; however, users can override it.
- Attributes:
- iFile : str
Input file name or path of the input file
- oFile : str
Name of the output file or path of the output file
- minSup : float or int or str
The user can specify minSup either as a count or as a proportion of the database size. If minSup is an integer, it is treated as an absolute count; otherwise it is treated as a proportion. Example: minSup=10 is treated as a count, while minSup=10.0 is treated as a proportion
- sep : str
This variable is used to distinguish items from one another in a transaction. The default separator is the tab character; however, users can override it
- startTime : float
To record the start time of the mining process
- endTime : float
To record the completion time of the mining process
- finalPatterns : dict
Storing the complete set of patterns in a dictionary variable
- memoryUSS : float
To store the total amount of USS memory consumed by the program
- memoryRSS : float
To store the total amount of RSS memory consumed by the program
- Database : list
To store the sequences of a database in a list
- _idDatabase : dict
To store the sequences of a database as bitmaps
- _maxSeqLen
The maximum length of a subsequence in a sequence
- Methods:
- _creatingItemSets():
Storing the complete sequences of the database/input file in a database variable
- _convert(value):
To convert the user specified minSup value
- make2BitDatabase():
To generate length-1 frequent patterns using a breadth-first search and update the Database to a sequential (bitmap) database
- DfsPruning(items,sStep,iStep):
The main algorithm of SPAM. It searches the s-step and i-step items of the current pattern, finds the next patterns together with their s-step and i-step candidates, and calls itself recursively with them until no more items are available for exploration.
- Sstep(s):
To convert a bit sequence to its s-step bit sequence: the first 1 is set to 0 and every subsequent position is set to 1 (e.g., 010101 => 001111, 00001001 => 00000111).
- mine()
Mining process will start from here
- getPatterns()
Complete set of patterns will be retrieved with this function
- savePatterns(oFile)
Complete set of frequent patterns will be loaded into an output file
- getPatternsAsDataFrame()
Complete set of frequent patterns will be loaded into a dataframe
- getMemoryUSS()
Total amount of USS memory consumed by the mining process will be retrieved from this function
- getMemoryRSS()
Total amount of RSS memory consumed by the mining process will be retrieved from this function
- getRuntime()
Total amount of runtime taken by the mining process will be retrieved from this function
- candidateToFrequent(candidateList)
Generates frequent patterns from the candidate patterns
- frequentToCandidate(frequentList, length)
Generates candidate patterns from the frequent patterns
Executing the code on terminal:
Format:
(.venv) $ python3 SPAM.py <inputFile> <outputFile> <minSup> (<separator>)
Example usage:
(.venv) $ python3 SPAM.py sampleDB.txt patterns.txt 10.0
Note: a float minSup is interpreted as a proportion of the number of database transactions, while an integer minSup is treated as an absolute support count.
Sample run of the importing code:
import PAMI.sequentialPatternMining.basic.SPAM as alg
obj = alg.SPAM(iFile, minSup)
obj.mine()
frequentPatterns = obj.getPatterns()
print("Total number of Frequent Patterns:", len(frequentPatterns))
obj.savePatterns(oFile)
Df = obj.getPatternsAsDataFrame()
memUSS = obj.getMemoryUSS()
print("Total Memory in USS:", memUSS)
memRSS = obj.getMemoryRSS()
print("Total Memory in RSS:", memRSS)
run = obj.getRuntime()
print("Total ExecutionTime in seconds:", run)
Credits:
The complete program was written by Shota Suzuki under the supervision of Professor Rage Uday Kiran.
- DfsPruning(items, sStep, iStep)[source]
The main algorithm of SPAM. It searches the s-step and i-step items of the current pattern, finds the next patterns together with their s-step and i-step candidates, and calls itself recursively with them until no more items are available for exploration.
- Attributes:
- items : str
The patterns obtained so far
- sStep : list
Items presumed to have an s-step relationship with items (an s-step item appears later, as in a-b and a-c)
- iStep : list
Items presumed to have an i-step relationship with items (an i-step item appears at the same time, as in ab and ac)
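To make the recursion concrete, here is a hedged, self-contained miniature of a depth-first search over per-sequence bit lists. Only the s-step extension is shown (the i-step is the analogous bitwise AND without the s-step shift), and every name and the data layout are illustrative assumptions rather than PAMI's internals; the real DfsPruning additionally prunes the candidate item lists passed down the recursion, whereas this sketch simply retries every item.

database = [                     # each sequence is a list of itemsets
    [{'a'}, {'b'}, {'a', 'c'}],
    [{'a', 'c'}, {'b'}],
    [{'b'}, {'c'}],
]
minSup = 2

def buildItemBitmaps(db):
    # one bit list per sequence and item: bit = 1 if the item occurs at that position
    bitmaps = {}
    for sid, seq in enumerate(db):
        for pos, itemset in enumerate(seq):
            for item in itemset:
                bitmaps.setdefault(item, [[0] * len(s) for s in db])
                bitmaps[item][sid][pos] = 1
    return bitmaps

def sStep(bits):
    # clear the first 1 and set every later position to 1
    out, seen = [], False
    for b in bits:
        out.append(1 if seen else 0)
        seen = seen or b == 1
    return out

def bitAnd(a, b):
    return [x & y for x, y in zip(a, b)]

def support(patternBitmaps):
    # a sequence supports the pattern if any of its bits is set
    return sum(1 for bits in patternBitmaps if any(bits))

def dfsPruning(pattern, patternBitmaps, items, bitmaps, results):
    for item in items:
        # s-step: the item starts a new itemset strictly after the pattern's last one
        extended = [bitAnd(sStep(p), i) for p, i in zip(patternBitmaps, bitmaps[item])]
        sup = support(extended)
        if sup >= minSup:
            newPattern = pattern + '-' + item
            results[newPattern] = sup
            dfsPruning(newPattern, extended, items, bitmaps, results)

bitmaps = buildItemBitmaps(database)
results = {}
for item, bitmap in bitmaps.items():
    if support(bitmap) >= minSup:
        results[item] = support(bitmap)
        dfsPruning(item, bitmap, sorted(bitmaps), bitmaps, results)
print(results)   # e.g. {'a': 2, 'a-b': 2, 'b': 3, 'b-c': 2, 'c': 3}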
- Sstep(s)[source]
To convert a bit sequence to its s-step bit sequence: the first 1 is set to 0 and every subsequent position is set to 1 (e.g., 010101 => 001111, 00001001 => 00000111).
- Parameters:
s – list to store each bit sequence
- Returns:
nextS – list storing the bit sequence converted by the s-step
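As a quick, hedged check of the rule above (a standalone snippet, not PAMI's code), the documented examples can be reproduced in a few lines:

def s_step(bits):
    # clear the first 1 and set every later position to 1
    out, seen = [], False
    for b in bits:
        out.append(1 if seen else 0)
        seen = seen or b == 1
    return out

print(s_step([0, 1, 0, 1, 0, 1]))         # [0, 0, 1, 1, 1, 1]       (010101 => 001111)
print(s_step([0, 0, 0, 0, 1, 0, 0, 1]))   # [0, 0, 0, 0, 0, 1, 1, 1] (00001001 => 00000111)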
- countSup(n)[source]
Count the support of a bit-sequence list.
- Parameters:
n – list to store each bit sequence
- Returns:
count – int, the support of this list
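A hedged interpretation of this count (an assumption about the intent, treating n as a list of per-sequence bit lists and counting how many of them contain at least one set bit; it is not PAMI's exact code):

def count_support(bitSequences):
    # a sequence contributes to the support if any of its bits is 1
    return sum(1 for bits in bitSequences if any(b == 1 for b in bits))

print(count_support([[0, 1, 0], [0, 0, 0], [1, 1, 1]]))   # 2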
- getMemoryRSS()[source]
Total amount of RSS memory consumed by the mining process will be retrieved from this function
- Returns:
returning RSS memory consumed by the mining process
- Return type:
float
- getMemoryUSS()[source]
Total amount of USS memory consumed by the mining process will be retrieved from this function
- Returns:
returning USS memory consumed by the mining process
- Return type:
float
- getPatterns()[source]
Function to send the set of frequent patterns after completion of the mining process
- Returns:
returning frequent patterns
- Return type:
dict
- getPatternsAsDataFrame()[source]
Storing final frequent patterns in a dataframe
- Returns:
returning frequent patterns in a dataframe
- Return type:
pd.DataFrame
- getRuntime()[source]
Calculating the total amount of runtime taken by the mining process
- Returns:
returning total amount of runtime taken by the mining process
- Return type:
float
- make2BitDatabase()[source]
To generate length-1 frequent patterns using a breadth-first search and update the Database to a sequential (bitmap) database
PAMI.sequentialPatternMining.basic.abstract module
PAMI.sequentialPatternMining.basic.prefixSpan module
- class PAMI.sequentialPatternMining.basic.prefixSpan.prefixSpan(iFile, minSup, sep='\t')[source]
Bases:
_sequentialPatterns
- Description:
PrefixSpan is one of the fundamental algorithms for discovering sequential frequent patterns in a transactional database.
This program employs the PrefixSpan property (also known as the downward closure property) to reduce the search space effectively.
The algorithm uses a depth-first, pattern-growth (prefix-projection) technique to find the complete set of frequent patterns in a transactional database. A brief sketch of the prefix-projection idea appears after the Credits note below.
- Reference:
J. Pei, J. Han, B. Mortazavi-Asl, J. Wang, H. Pinto, Q. Chen, U. Dayal, M. Hsu: Mining Sequential Patterns by Pattern-Growth: The PrefixSpan Approach. IEEE Trans. Knowl. Data Eng. 16(11): 1424-1440 (2004)
- Parameters:
iFile – str : Name of the input file to mine the complete set of sequential frequent patterns
oFile – str : Name of the output file to store the complete set of sequential frequent patterns
minSup – float or int or str : The minimum number (or proportion) of transactions in the database in which a pattern must appear. Example: minSup=10 is treated as an absolute count, while minSup=10.0 is treated as a proportion of the database size
sep – str : This variable is used to distinguish items from one another in a transaction. The default separator is the tab character; however, users can override it.
- Attributes:
- iFile : str
Input file name or path of the input file
- oFile : str
Name of the output file or path of the output file
- minSup : float or int or str
The user can specify minSup either as a count or as a proportion of the database size. If minSup is an integer, it is treated as an absolute count; otherwise it is treated as a proportion. Example: minSup=10 is treated as a count, while minSup=10.0 is treated as a proportion
- sep : str
This variable is used to distinguish items from one another in a transaction. The default separator is the tab character; however, users can override it
- startTime : float
To record the start time of the mining process
- endTime : float
To record the completion time of the mining process
- finalPatterns : dict
Storing the complete set of patterns in a dictionary variable
- memoryUSS : float
To store the total amount of USS memory consumed by the program
- memoryRSS : float
To store the total amount of RSS memory consumed by the program
- Database : list
To store the transactions of a database in a list
- Methods:
- mine()
Mining process will start from here
- getPatterns()
Complete set of patterns will be retrieved with this function
- savePatterns(oFile)
Complete set of frequent patterns will be loaded into an output file
- getPatternsAsDataFrame()
Complete set of frequent patterns will be loaded into a dataframe
- getMemoryUSS()
Total amount of USS memory consumed by the mining process will be retrieved from this function
- getMemoryRSS()
Total amount of RSS memory consumed by the mining process will be retrieved from this function
- getRuntime()
Total amount of runtime taken by the mining process will be retrieved from this function
- candidateToFrequent(candidateList)
Generates frequent patterns from the candidate patterns
- frequentToCandidate(frequentList, length)
Generates candidate patterns from the frequent patterns
Methods to execute code on terminal
Format:
(.venv) $ python3 prefixSpan.py <inputFile> <outputFile> <minSup>
Example usage:
(.venv) $ python3 prefixSpan.py sampleDB.txt patterns.txt 10
Note: minSup will be considered as a support count or frequency.
Importing this algorithm into a Python program
import PAMI.sequentialPatternMining.basic.prefixSpan as alg
obj = alg.prefixSpan(iFile, minSup)
obj.mine()
frequentPatterns = obj.getPatterns()
print("Total number of Frequent Patterns:", len(frequentPatterns))
obj.save(oFile)
Df = obj.getPatternsAsDataFrame()
memUSS = obj.getMemoryUSS()
print("Total Memory in USS:", memUSS)
memRSS = obj.getMemoryRSS()
print("Total Memory in RSS:", memRSS)
run = obj.getRuntime()
print("Total ExecutionTime in seconds:", run)
Credits:
The complete program was written by Suzuki Shota under the supervision of Professor Rage Uday Kiran.
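The prefix-projection idea referenced in the description above can be illustrated with a hedged, self-contained sketch. It treats each sequence as a plain list of single-item events, which is a simplification, and the function names and data layout are assumptions; it does not reproduce PAMI's prefixSpan implementation.

def project(database, prefixItem):
    # keep, for each sequence containing prefixItem, the suffix that starts
    # right after its first occurrence
    projected = []
    for seq in database:
        if prefixItem in seq:
            projected.append(seq[seq.index(prefixItem) + 1:])
    return projected

def prefix_span(database, minSup, prefix=()):
    patterns = {}
    counts = {}
    for seq in database:                  # support = number of sequences containing the item
        for item in set(seq):
            counts[item] = counts.get(item, 0) + 1
    for item, sup in counts.items():
        if sup >= minSup:
            newPrefix = prefix + (item,)
            patterns[newPrefix] = sup
            # grow the pattern inside the database projected on the new prefix
            patterns.update(prefix_span(project(database, item), minSup, newPrefix))
    return patterns

db = [['a', 'b', 'c'], ['a', 'c'], ['b', 'c'], ['a', 'b']]
print(prefix_span(db, minSup=2))
# e.g. {('a',): 3, ('a', 'b'): 2, ('a', 'c'): 2, ('b',): 3, ('b', 'c'): 2, ('c',): 3} (key order may vary)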
- getMemoryRSS()[source]
Total amount of RSS memory consumed by the mining process will be retrieved from this function
- Returns:
returning RSS memory consumed by the mining process
- Return type:
float
- getMemoryUSS()[source]
Total amount of USS memory consumed by the mining process will be retrieved from this function
- Returns:
returning USS memory consumed by the mining process
- Return type:
float
- getPatterns()[source]
Function to send the set of frequent patterns after completion of the mining process
- Returns:
returning frequent patterns
- Return type:
dict
- getPatternsAsDataFrame()[source]
Storing final frequent patterns in a dataframe
- Returns:
returning frequent patterns in a dataframe
- Return type:
pd.DataFrame
- getRuntime()[source]
Calculating the total amount of runtime taken by the mining process
- Returns:
returning total amount of runtime taken by the mining process
- Return type:
float
- getSameSeq(startrow)[source]
To get words in the latest sequence
- Parameters:
startrow – the patterns obtained before
- makeNext(sepDatabase, startrow)[source]
To get the next pattern by adding the head word to the next sequence of startrow.
- Parameters:
sepDatabase – dict of the words and rows that are to be added to startrow
startrow – the patterns obtained before
- makeNextSame(sepDatabase, startrow)[source]
To get the next pattern by adding the head word to the latest sequence of startrow.
- Parameters:
sepDatabase – dict of the words and rows that are to be added to startrow
startrow – the patterns obtained before
- makeSeqDatabaseFirst(database)[source]
To make a list of length-1 sequence datasets that start from the same word. Only one entry is stored per line.
- Parameters:
database – To store the transactions of a database in a list
- makeSeqDatabaseSame(database, startrow)[source]
To make a list of sequence datasets that start from the same word (head). Only one entry is stored per line, and the entries are separated according to whether head appears in the latest sequence of startrow or not.
- Parameters:
database – To store the transactions of a database in a list
startrow – the patterns obtained before
- makeSupDatabase(database, head)[source]
To delete infrequent words, except for the words in the latest sequence.
- Parameters:
database – list of database lines having the same startrow and head word
head – list of words in the latest sequence
- Returns:
the changed database
- save(outFile)[source]
Complete set of frequent patterns will be loaded into an output file
- Parameters:
outFile (csv file) – name of the output file
- serchSame(database, startrow, give)[source]
To get patterns of length 2 or more in the same sequence.
- Parameters:
database – list of the transactions of the database that have the same startrow and head word
startrow – list of the patterns obtained before
give – list of the words in the latest sequence of startrow