XCSP3 in tools #597


Draft
wants to merge 41 commits into master

Conversation


@Wout4 Wout4 commented Feb 21, 2025

Should also have an easy benchmarking script.


Wout4 commented Feb 21, 2025

todo: README, add Python and package requirements somewhere
maybe add a downloader for some XCSP3 instances (I think we had that somewhere already)




def write_to_dataframe(lock, model_name, t_solve, t_transform, df_path):
Wout4 (Collaborator Author) commented on this line:
add solvername

@tias tias left a comment

Some general comments:

  • the has_subexprs should be mainlined in a separate pull request?
  • I think we should rename the files a bit to set better expectations:
    • callbacks: parser_callbacks? or pycsp3_callbacks?
    • installer: this is actually not an installer of pycsp3 or the parser... it's about downloading instances, I think? so, xcsp3_downloader?
    • solution.py: I have no idea... should it just be part of run_model?
      I guess we might have 2 runners: a runner that is competition-ready (e.g. it accepts the args and writes in the format that the competition expects),
      and a runner that we can use to benchmark our code, e.g. that logs specific runtimes to a CSV file?
    • xglobals... I guess this is included by the callbacks file? maybe just extra_globals.py or xcsp3_globals.py?


tias commented Feb 26, 2025

Also, I'm a bit surprised by the 'tests' subfolder and run_models being in there...


tias commented Feb 26, 2025

cleaned up the printing a bit.

doesn't work:
Found 1 models in folder 'tests/models/'
Will use 7 parallel solves, with solver ortools...

  • running model tests/models/Fillomino-mini-5-0_c24.xml
    Error parsing: module 'cpmpy' has no attribute 'ShortTable'
    Error solving: cannot access local variable 't_parse' where it is not associated with a value

Ran models for 0 minutes
Warning: no variables in this model (and so, no generated file)!

It doesn't look like ShortTable is already in the globals, though it does expect it?
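The second error ("cannot access local variable 't_parse'") is a classic UnboundLocalError pattern: the variable is first bound inside a try block, so when parsing raises (here, because ShortTable is missing), the later timing code reads a name that was never assigned and the real error gets masked. A minimal sketch of the failure mode and one possible fix, with hypothetical `run_model`/`parse`/`solve` names standing in for the PR's actual code:

```python
import time

def run_model(parse, solve):
    # Hypothetical reconstruction: binding t_parse only inside the try
    # means a parse error leaves it unbound, and any later access crashes
    # with a second, misleading UnboundLocalError.
    t_parse = None  # bind up front so a parse failure can't mask itself
    try:
        start = time.time()
        model = parse()
        t_parse = time.time() - start
    except Exception as e:
        print(f"Error parsing: {e}")
        return None, t_parse  # safe: t_parse is always bound
    solve(model)
    return model, t_parse
```

Binding the variable before the try (or reporting the parse error and returning early) keeps the "Error parsing" message as the only error the user sees.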


Wout4 commented Feb 28, 2025

The has_subexpr for Table is indeed in another PR: #596.
The solution.py was for the XCSP3 competition; I think I accidentally added it here. Not sure if we want those competition-specific things in this PR.
ShortTable was merged last week, so maybe you have to update first?

Will update the file names

@tias tias marked this pull request as draft April 7, 2025 12:48

tias commented Apr 26, 2025

Good, but the downloader is a hack...

I think we should follow Serdar's opinion piece: https://osullivan.ucc.ie/CPML2025/papers/kadioglu.pdf
which is already very much in line with our philosophy, but he makes the case for having dataset access like we have in ML tools.

E.g. torch has a very elegant dataset class (I'm not suggesting we import torch, but we could mimic such an abstract class):
https://pytorch.org/tutorials/beginner/basics/data_tutorial.html#creating-a-custom-dataset-for-your-files

where a dataset just needs to implement __init__, __len__ and __getitem__...

e.g. we could imagine a class like

class XCSP3Comp(AbsDataset):
  def __init__(self, root='.', year='2024', category='opt', transform=None, target_transform=None):
      ...

which would do the downloading like their MNIST dataset etc. do... __getitem__ would return (y, x) where y = whatever metadata XCSP3 has about the instance (e.g. optimal value, name, category, ...), and x is the XCSP3 XML string.

What's nice is that we could then imagine a function parse_xcsp3 which takes an XCSP3 XML string as input and returns a CPMpy expression tree or so? Then we can do

D = XCSP3Comp(year='2024', category='opt', transform=parse_xcsp3)
... some parallel for (_, (model,vars)) in D:
    model.solve()
    ...

or something? This is just a brainstorm, especially the parse_xcsp3 as a transform... but instead of the current downloader, creating a generic, non-CPMpy-specific dataset object following the torch style would be great...
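A minimal sketch of what that torch-style abstraction could look like. `AbsDataset`, `XCSP3Comp`, and the in-memory stand-in instance are all hypothetical names from the comment above, not existing CPMpy API; a real implementation would download and cache the competition instances in `__init__` instead.

```python
from abc import ABC, abstractmethod

class AbsDataset(ABC):
    """Torch-like map-style dataset: subclasses provide __init__,
    __len__ and __getitem__, mirroring torch.utils.data.Dataset."""
    @abstractmethod
    def __len__(self): ...
    @abstractmethod
    def __getitem__(self, idx): ...

class XCSP3Comp(AbsDataset):
    def __init__(self, root='.', year='2024', category='opt',
                 transform=None, target_transform=None):
        self.root, self.year, self.category = root, year, category
        self.transform = transform              # e.g. parse_xcsp3
        self.target_transform = target_transform
        # A real implementation would download/cache the instances here;
        # this sketch uses a tiny in-memory stand-in: (metadata, xml) pairs.
        self._items = [({"name": "toy", "category": category}, "<instance/>")]

    def __len__(self):
        return len(self._items)

    def __getitem__(self, idx):
        y, x = self._items[idx]        # y = metadata, x = XCSP3 XML string
        if self.transform:
            x = self.transform(x)
        if self.target_transform:
            y = self.target_transform(y)
        return y, x
```

With `transform=parse_xcsp3`, iterating the dataset would then yield parsed models directly, exactly as in the loop sketched above.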
