I know that A-Parser has the ability to remove duplicates while it is scraping results, but I will run multiple tasks and output them to the same file, so I will almost always have duplicates in the output. Having this as a task would be nice: I could then queue my Google::Suggest scraping tasks and have the last task remove the duplicates.
Feature: Remove Duplicates task (Utility::Remove Duplicates)
Description: Select a file in the /results folder and remove duplicate lines.
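In the meantime, this is easy to do with a small standalone script run against a results file once all tasks have finished. A minimal sketch in Python (the file path is just an example, and the script is not part of A-Parser itself):

```python
# dedupe.py - remove duplicate lines from a results file, keeping the first occurrence.
# Usage: python dedupe.py results/google_suggest.txt
import sys

def remove_duplicate_lines(path):
    seen = set()
    unique_lines = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line not in seen:          # keep only the first occurrence of each line
                seen.add(line)
                unique_lines.append(line)
    # rewrite the file in place with duplicates dropped
    with open(path, "w", encoding="utf-8") as f:
        f.writelines(unique_lines)

if __name__ == "__main__":
    remove_duplicate_lines(sys.argv[1])
```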
You can use the Keep unique option: in the first task create a new Keep unique, and in the second, third, and so on use the previously created Keep unique. Then you won't have duplicates between result files. Also, a Base manager (with a lot of features) is planned for the future (Russian thread: http://a-parser.com/threads/531/).
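Conceptually, a shared Keep unique acts like a persistent "already seen" list that every task consults before writing a line, which is why later tasks never re-emit what an earlier task produced. A rough sketch of that idea (this is only a conceptual model, not A-Parser's actual storage format; the file names are assumptions):

```python
# keep_unique_sketch.py - rough model of a "Keep unique" list shared across tasks.
import os

SEEN_FILE = "results/keep_unique_suggest.txt"  # hypothetical shared store

def load_seen():
    """Load the set of lines any previous task has already written."""
    if not os.path.exists(SEEN_FILE):
        return set()
    with open(SEEN_FILE, encoding="utf-8") as f:
        return set(line.rstrip("\n") for line in f)

def run_task(new_results, output_file):
    """Append only never-seen lines to the output, then update the shared store."""
    seen = load_seen()
    fresh = [line for line in new_results if line not in seen]
    if fresh:
        with open(output_file, "a", encoding="utf-8") as out:
            out.write("\n".join(fresh) + "\n")
        with open(SEEN_FILE, "a", encoding="utf-8") as store:
            store.write("\n".join(fresh) + "\n")

# Example: two tasks writing to the same output file without duplicates.
run_task(["seo tools", "keyword research"], "results/suggest.txt")
run_task(["keyword research", "link building"], "results/suggest.txt")  # repeat is skipped
```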
Thanks for the clarification. I didn't understand how to use the Keep unique option before, but I think I do now. How do you delete the Keep unique entries once you are done with them? I don't want to keep building a longer and longer list.